Jan 14 01:20:29.614859 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:26:24 -00 2026
Jan 14 01:20:29.614881 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260
Jan 14 01:20:29.614892 kernel: BIOS-provided physical RAM map:
Jan 14 01:20:29.614898 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 14 01:20:29.614904 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 14 01:20:29.614910 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 14 01:20:29.614917 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 14 01:20:29.614923 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 14 01:20:29.614929 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 14 01:20:29.614935 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 14 01:20:29.614943 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 01:20:29.614949 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 14 01:20:29.614955 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 14 01:20:29.614961 kernel: NX (Execute Disable) protection: active
Jan 14 01:20:29.614969 kernel: APIC: Static calls initialized
Jan 14 01:20:29.614977 kernel: SMBIOS 2.8 present.
Jan 14 01:20:29.614984 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 14 01:20:29.614990 kernel: DMI: Memory slots populated: 1/1
Jan 14 01:20:29.614996 kernel: Hypervisor detected: KVM
Jan 14 01:20:29.615003 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 14 01:20:29.615009 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 14 01:20:29.615015 kernel: kvm-clock: using sched offset of 4489139444 cycles
Jan 14 01:20:29.615023 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 14 01:20:29.615030 kernel: tsc: Detected 2445.424 MHz processor
Jan 14 01:20:29.615039 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 01:20:29.615046 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 01:20:29.615053 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 14 01:20:29.615060 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 14 01:20:29.615067 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 01:20:29.615074 kernel: Using GB pages for direct mapping
Jan 14 01:20:29.615081 kernel: ACPI: Early table checksum verification disabled
Jan 14 01:20:29.615089 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 14 01:20:29.615096 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:20:29.615103 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:20:29.615110 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:20:29.615116 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 14 01:20:29.615123 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:20:29.615130 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:20:29.615139 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:20:29.615190 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:20:29.615201 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 14 01:20:29.615208 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 14 01:20:29.615215 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 14 01:20:29.615222 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 14 01:20:29.615231 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 14 01:20:29.615239 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 14 01:20:29.615245 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 14 01:20:29.615252 kernel: No NUMA configuration found
Jan 14 01:20:29.615259 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 14 01:20:29.615266 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 14 01:20:29.615276 kernel: Zone ranges:
Jan 14 01:20:29.615283 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 01:20:29.615290 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 14 01:20:29.615297 kernel: Normal empty
Jan 14 01:20:29.615304 kernel: Device empty
Jan 14 01:20:29.615311 kernel: Movable zone start for each node
Jan 14 01:20:29.615318 kernel: Early memory node ranges
Jan 14 01:20:29.615325 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 14 01:20:29.615333 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 14 01:20:29.615341 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 14 01:20:29.615348 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 01:20:29.615355 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 14 01:20:29.615362 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 14 01:20:29.615369 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 14 01:20:29.615376 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 14 01:20:29.615385 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 14 01:20:29.615392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 14 01:20:29.615399 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 14 01:20:29.615406 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 01:20:29.615413 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 14 01:20:29.615420 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 14 01:20:29.615427 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 01:20:29.615435 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 14 01:20:29.615443 kernel: TSC deadline timer available
Jan 14 01:20:29.615450 kernel: CPU topo: Max. logical packages: 1
Jan 14 01:20:29.615457 kernel: CPU topo: Max. logical dies: 1
Jan 14 01:20:29.615464 kernel: CPU topo: Max. dies per package: 1
Jan 14 01:20:29.615471 kernel: CPU topo: Max. threads per core: 1
Jan 14 01:20:29.615478 kernel: CPU topo: Num. cores per package: 4
Jan 14 01:20:29.615485 kernel: CPU topo: Num. threads per package: 4
Jan 14 01:20:29.615492 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 14 01:20:29.615550 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 14 01:20:29.615559 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 14 01:20:29.615567 kernel: kvm-guest: setup PV sched yield
Jan 14 01:20:29.615574 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 14 01:20:29.615581 kernel: Booting paravirtualized kernel on KVM
Jan 14 01:20:29.615588 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 01:20:29.615595 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 14 01:20:29.615605 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 14 01:20:29.615612 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 14 01:20:29.615619 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 14 01:20:29.615626 kernel: kvm-guest: PV spinlocks enabled
Jan 14 01:20:29.615634 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 01:20:29.615642 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260
Jan 14 01:20:29.615649 kernel: random: crng init done
Jan 14 01:20:29.615659 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 01:20:29.615666 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 01:20:29.615673 kernel: Fallback order for Node 0: 0
Jan 14 01:20:29.615680 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 14 01:20:29.615688 kernel: Policy zone: DMA32
Jan 14 01:20:29.615695 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 01:20:29.615702 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 14 01:20:29.615711 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 14 01:20:29.615719 kernel: ftrace: allocated 157 pages with 5 groups
Jan 14 01:20:29.615726 kernel: Dynamic Preempt: voluntary
Jan 14 01:20:29.615733 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 01:20:29.615741 kernel: rcu: RCU event tracing is enabled.
Jan 14 01:20:29.615749 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 14 01:20:29.615756 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 01:20:29.615763 kernel: Rude variant of Tasks RCU enabled.
Jan 14 01:20:29.615772 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 01:20:29.615779 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 01:20:29.615787 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 14 01:20:29.615793 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 01:20:29.615801 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 01:20:29.615809 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 01:20:29.615816 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 14 01:20:29.615826 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 01:20:29.615840 kernel: Console: colour VGA+ 80x25
Jan 14 01:20:29.615849 kernel: printk: legacy console [ttyS0] enabled
Jan 14 01:20:29.615857 kernel: ACPI: Core revision 20240827
Jan 14 01:20:29.615864 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 14 01:20:29.615872 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 01:20:29.615879 kernel: x2apic enabled
Jan 14 01:20:29.615886 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 14 01:20:29.615894 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 14 01:20:29.615903 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 14 01:20:29.615911 kernel: kvm-guest: setup PV IPIs
Jan 14 01:20:29.615918 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 14 01:20:29.615926 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 14 01:20:29.615935 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 14 01:20:29.615943 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 14 01:20:29.615950 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 14 01:20:29.615958 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 14 01:20:29.615965 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 01:20:29.615973 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 01:20:29.615980 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 14 01:20:29.615990 kernel: Speculative Store Bypass: Vulnerable
Jan 14 01:20:29.615997 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 14 01:20:29.616005 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 14 01:20:29.616013 kernel: active return thunk: srso_alias_return_thunk
Jan 14 01:20:29.616020 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 14 01:20:29.616028 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 14 01:20:29.616035 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 01:20:29.616045 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 01:20:29.616053 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 01:20:29.616060 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 01:20:29.616067 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 01:20:29.616075 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 14 01:20:29.616082 kernel: Freeing SMP alternatives memory: 32K
Jan 14 01:20:29.616090 kernel: pid_max: default: 32768 minimum: 301
Jan 14 01:20:29.616099 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 01:20:29.616106 kernel: landlock: Up and running.
Jan 14 01:20:29.616114 kernel: SELinux: Initializing.
Jan 14 01:20:29.616121 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 01:20:29.616129 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 01:20:29.616136 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 14 01:20:29.616177 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 14 01:20:29.616188 kernel: signal: max sigframe size: 1776
Jan 14 01:20:29.616195 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 01:20:29.616203 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 01:20:29.616210 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 14 01:20:29.616217 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 01:20:29.616225 kernel: smp: Bringing up secondary CPUs ...
Jan 14 01:20:29.616232 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 01:20:29.616241 kernel: .... node #0, CPUs: #1 #2 #3
Jan 14 01:20:29.616249 kernel: smp: Brought up 1 node, 4 CPUs
Jan 14 01:20:29.616256 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 14 01:20:29.616264 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120520K reserved, 0K cma-reserved)
Jan 14 01:20:29.616272 kernel: devtmpfs: initialized
Jan 14 01:20:29.616279 kernel: x86/mm: Memory block size: 128MB
Jan 14 01:20:29.616286 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 01:20:29.616296 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 14 01:20:29.616303 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 01:20:29.616311 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 01:20:29.616318 kernel: audit: initializing netlink subsys (disabled)
Jan 14 01:20:29.616326 kernel: audit: type=2000 audit(1768353624.981:1): state=initialized audit_enabled=0 res=1
Jan 14 01:20:29.616333 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 01:20:29.616340 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 01:20:29.616350 kernel: cpuidle: using governor menu
Jan 14 01:20:29.616357 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 01:20:29.616365 kernel: dca service started, version 1.12.1
Jan 14 01:20:29.616372 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 14 01:20:29.616380 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 14 01:20:29.616387 kernel: PCI: Using configuration type 1 for base access
Jan 14 01:20:29.616395 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 01:20:29.616404 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 01:20:29.616412 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 01:20:29.616419 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 01:20:29.616427 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 01:20:29.616434 kernel: ACPI: Added _OSI(Module Device)
Jan 14 01:20:29.616441 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 01:20:29.616449 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 01:20:29.616458 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 01:20:29.616465 kernel: ACPI: Interpreter enabled
Jan 14 01:20:29.616473 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 14 01:20:29.616481 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 01:20:29.616488 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 01:20:29.616495 kernel: PCI: Using E820 reservations for host bridge windows
Jan 14 01:20:29.616549 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 14 01:20:29.616557 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 01:20:29.616796 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 01:20:29.616980 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 14 01:20:29.625094 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 14 01:20:29.625110 kernel: PCI host bridge to bus 0000:00
Jan 14 01:20:29.625343 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 14 01:20:29.625583 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 14 01:20:29.625750 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 14 01:20:29.625906 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 14 01:20:29.626061 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 14 01:20:29.626268 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 14 01:20:29.626426 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 01:20:29.626707 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 14 01:20:29.626890 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 14 01:20:29.627065 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 14 01:20:29.627284 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 14 01:20:29.627452 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 14 01:20:29.627721 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 14 01:20:29.627903 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 14 01:20:29.628072 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 14 01:20:29.628296 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 14 01:20:29.628464 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 14 01:20:29.628786 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 14 01:20:29.628964 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 14 01:20:29.629132 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 14 01:20:29.629358 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 14 01:20:29.629599 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 14 01:20:29.629772 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 14 01:20:29.629945 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 14 01:20:29.630111 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 14 01:20:29.630330 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 14 01:20:29.630574 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 14 01:20:29.630788 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 14 01:20:29.631015 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 14 01:20:29.631251 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 14 01:20:29.631574 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 14 01:20:29.631768 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 14 01:20:29.631979 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 14 01:20:29.631992 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 14 01:20:29.632000 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 14 01:20:29.632013 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 14 01:20:29.632020 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 14 01:20:29.632028 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 14 01:20:29.632036 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 14 01:20:29.632043 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 14 01:20:29.632050 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 14 01:20:29.632058 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 14 01:20:29.632068 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 14 01:20:29.632075 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 14 01:20:29.632082 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 14 01:20:29.632090 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 14 01:20:29.632097 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 14 01:20:29.632105 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 14 01:20:29.632113 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 14 01:20:29.632123 kernel: iommu: Default domain type: Translated
Jan 14 01:20:29.632130 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 01:20:29.632138 kernel: PCI: Using ACPI for IRQ routing
Jan 14 01:20:29.632192 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 14 01:20:29.632200 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 14 01:20:29.632208 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 14 01:20:29.632387 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 14 01:20:29.632631 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 14 01:20:29.632804 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 14 01:20:29.632816 kernel: vgaarb: loaded
Jan 14 01:20:29.632824 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 14 01:20:29.632831 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 14 01:20:29.632839 kernel: clocksource: Switched to clocksource kvm-clock
Jan 14 01:20:29.632846 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 01:20:29.632859 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 01:20:29.632866 kernel: pnp: PnP ACPI init
Jan 14 01:20:29.633050 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 14 01:20:29.633061 kernel: pnp: PnP ACPI: found 6 devices
Jan 14 01:20:29.633069 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 01:20:29.633077 kernel: NET: Registered PF_INET protocol family
Jan 14 01:20:29.633087 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 01:20:29.633095 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 14 01:20:29.633103 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 01:20:29.633111 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 01:20:29.633118 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 14 01:20:29.633126 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 14 01:20:29.633133 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 01:20:29.633185 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 01:20:29.633194 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 01:20:29.633202 kernel: NET: Registered PF_XDP protocol family
Jan 14 01:20:29.633363 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 14 01:20:29.637347 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 14 01:20:29.640710 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 14 01:20:29.640877 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 14 01:20:29.641041 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 14 01:20:29.641245 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 14 01:20:29.641256 kernel: PCI: CLS 0 bytes, default 64
Jan 14 01:20:29.641265 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 14 01:20:29.641273 kernel: Initialise system trusted keyrings
Jan 14 01:20:29.641281 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 14 01:20:29.641289 kernel: Key type asymmetric registered
Jan 14 01:20:29.641300 kernel: Asymmetric key parser 'x509' registered
Jan 14 01:20:29.641308 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 14 01:20:29.641316 kernel: io scheduler mq-deadline registered
Jan 14 01:20:29.641323 kernel: io scheduler kyber registered
Jan 14 01:20:29.641331 kernel: io scheduler bfq registered
Jan 14 01:20:29.641339 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 01:20:29.641347 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 14 01:20:29.641357 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 14 01:20:29.641365 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 14 01:20:29.641373 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 01:20:29.641380 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 01:20:29.641388 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 14 01:20:29.641396 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 14 01:20:29.641404 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 14 01:20:29.641646 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 14 01:20:29.641661 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 14 01:20:29.641827 kernel: rtc_cmos 00:04: registered as rtc0
Jan 14 01:20:29.641989 kernel: rtc_cmos 00:04: setting system clock to 2026-01-14T01:20:27 UTC (1768353627)
Jan 14 01:20:29.642203 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 14 01:20:29.642216 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 14 01:20:29.642228 kernel: NET: Registered PF_INET6 protocol family
Jan 14 01:20:29.642236 kernel: Segment Routing with IPv6
Jan 14 01:20:29.642244 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 01:20:29.642252 kernel: NET: Registered PF_PACKET protocol family
Jan 14 01:20:29.642259 kernel: Key type dns_resolver registered
Jan 14 01:20:29.642267 kernel: IPI shorthand broadcast: enabled
Jan 14 01:20:29.642275 kernel: sched_clock: Marking stable (2358024929, 436236170)->(2985517576, -191256477)
Jan 14 01:20:29.642285 kernel: registered taskstats version 1
Jan 14 01:20:29.642292 kernel: Loading compiled-in X.509 certificates
Jan 14 01:20:29.642300 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e43fcdb17feb86efe6ca4b76910b93467fb95f4f'
Jan 14 01:20:29.642308 kernel: Demotion targets for Node 0: null
Jan 14 01:20:29.642315 kernel: Key type .fscrypt registered
Jan 14 01:20:29.642323 kernel: Key type fscrypt-provisioning registered
Jan 14 01:20:29.642331 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 01:20:29.642341 kernel: ima: Allocated hash algorithm: sha1
Jan 14 01:20:29.642348 kernel: ima: No architecture policies found
Jan 14 01:20:29.642355 kernel: clk: Disabling unused clocks
Jan 14 01:20:29.642363 kernel: Freeing unused kernel image (initmem) memory: 15536K
Jan 14 01:20:29.642371 kernel: Write protecting the kernel read-only data: 47104k
Jan 14 01:20:29.642378 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K
Jan 14 01:20:29.642386 kernel: Run /init as init process
Jan 14 01:20:29.642393 kernel: with arguments:
Jan 14 01:20:29.642403 kernel: /init
Jan 14 01:20:29.642411 kernel: with environment:
Jan 14 01:20:29.642418 kernel: HOME=/
Jan 14 01:20:29.642425 kernel: TERM=linux
Jan 14 01:20:29.642433 kernel: SCSI subsystem initialized
Jan 14 01:20:29.642440 kernel: libata version 3.00 loaded.
Jan 14 01:20:29.642673 kernel: ahci 0000:00:1f.2: version 3.0
Jan 14 01:20:29.642690 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 14 01:20:29.642861 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 14 01:20:29.643029 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 14 01:20:29.643243 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 14 01:20:29.643437 kernel: scsi host0: ahci
Jan 14 01:20:29.643708 kernel: scsi host1: ahci
Jan 14 01:20:29.643898 kernel: scsi host2: ahci
Jan 14 01:20:29.644079 kernel: scsi host3: ahci
Jan 14 01:20:29.644324 kernel: scsi host4: ahci
Jan 14 01:20:29.644596 kernel: scsi host5: ahci
Jan 14 01:20:29.644610 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 14 01:20:29.644623 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 14 01:20:29.644631 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 14 01:20:29.644639 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 14 01:20:29.644647 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 14 01:20:29.644655 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 14 01:20:29.644663 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 14 01:20:29.644671 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 14 01:20:29.644685 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 14 01:20:29.644693 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 14 01:20:29.644701 kernel: ata3.00: LPM support broken, forcing max_power
Jan 14 01:20:29.644709 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 14 01:20:29.644717 kernel: ata3.00: applying bridge limits
Jan 14 01:20:29.644725 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 14 01:20:29.644732 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 14 01:20:29.644742 kernel: ata3.00: LPM support broken, forcing max_power
Jan 14 01:20:29.644750 kernel: ata3.00: configured for UDMA/100
Jan 14 01:20:29.645003 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 14 01:20:29.645238 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 14 01:20:29.645412 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 14 01:20:29.645423 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 14 01:20:29.645436 kernel: GPT:16515071 != 27000831
Jan 14 01:20:29.645444 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 14 01:20:29.645452 kernel: GPT:16515071 != 27000831
Jan 14 01:20:29.645459 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 14 01:20:29.645467 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 14 01:20:29.645719 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 14 01:20:29.645732 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 01:20:29.645927 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 14 01:20:29.645939 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 01:20:29.645948 kernel: device-mapper: uevent: version 1.0.3
Jan 14 01:20:29.645956 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 14 01:20:29.645993 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 14 01:20:29.646023 kernel: raid6: avx2x4 gen() 36927 MB/s
Jan 14 01:20:29.646036 kernel: raid6: avx2x2 gen() 35865 MB/s
Jan 14 01:20:29.646044 kernel: raid6: avx2x1 gen() 26926 MB/s
Jan 14 01:20:29.646052 kernel: raid6: using algorithm avx2x4 gen() 36927 MB/s
Jan 14 01:20:29.646080 kernel: raid6: .... xor() 4670 MB/s, rmw enabled
Jan 14 01:20:29.646089 kernel: raid6: using avx2x2 recovery algorithm
Jan 14 01:20:29.646118 kernel: xor: automatically using best checksumming function avx
Jan 14 01:20:29.646126 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 01:20:29.646135 kernel: BTRFS: device fsid cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (181)
Jan 14 01:20:29.646182 kernel: BTRFS info (device dm-0): first mount of filesystem cd6116b6-e1b6-44f4-b1e2-5e7c5565b295
Jan 14 01:20:29.646192 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:20:29.646200 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 01:20:29.646208 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 14 01:20:29.646218 kernel: loop: module loaded
Jan 14 01:20:29.646227 kernel: loop0: detected capacity change from 0 to 100544
Jan 14 01:20:29.646235 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 01:20:29.646244 systemd[1]: Successfully made /usr/ read-only.
Jan 14 01:20:29.646255 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 01:20:29.646264 systemd[1]: Detected virtualization kvm.
Jan 14 01:20:29.646275 systemd[1]: Detected architecture x86-64.
Jan 14 01:20:29.646283 systemd[1]: Running in initrd.
Jan 14 01:20:29.646292 systemd[1]: No hostname configured, using default hostname.
Jan 14 01:20:29.646300 systemd[1]: Hostname set to .
Jan 14 01:20:29.646308 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 01:20:29.646319 systemd[1]: Queued start job for default target initrd.target.
Jan 14 01:20:29.646330 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 01:20:29.646338 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 01:20:29.646347 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 01:20:29.646356 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 01:20:29.646364 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 01:20:29.646373 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 01:20:29.646384 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 01:20:29.646392 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 01:20:29.646401 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 01:20:29.646409 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 01:20:29.646417 systemd[1]: Reached target paths.target - Path Units.
Jan 14 01:20:29.646425 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 01:20:29.646433 systemd[1]: Reached target swap.target - Swaps.
Jan 14 01:20:29.646444 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 01:20:29.646453 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 01:20:29.646461 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 01:20:29.646470 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 01:20:29.646478 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 01:20:29.646486 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 14 01:20:29.646494 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 01:20:29.646573 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 01:20:29.646582 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 01:20:29.646591 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 01:20:29.646662 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 01:20:29.646671 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 01:20:29.646680 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 01:20:29.646692 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 01:20:29.646701 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 14 01:20:29.646709 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 01:20:29.646720 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 01:20:29.646728 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 01:20:29.646739 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 01:20:29.646747 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 01:20:29.646756 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 01:20:29.646793 systemd-journald[319]: Collecting audit messages is enabled.
Jan 14 01:20:29.646818 kernel: audit: type=1130 audit(1768353629.616:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.646827 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 01:20:29.646835 kernel: audit: type=1130 audit(1768353629.630:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.646845 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 01:20:29.646856 systemd-journald[319]: Journal started
Jan 14 01:20:29.646874 systemd-journald[319]: Runtime Journal (/run/log/journal/4d59ca1171374e269ccbe3939beadb1a) is 6M, max 48.2M, 42.1M free.
Jan 14 01:20:29.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.655609 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 01:20:29.659440 systemd-modules-load[320]: Inserted module 'br_netfilter'
Jan 14 01:20:29.840354 kernel: Bridge firewalling registered
Jan 14 01:20:29.840380 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 01:20:29.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.844646 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 01:20:29.851998 kernel: audit: type=1130 audit(1768353629.843:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.867678 kernel: audit: type=1130 audit(1768353629.858:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.876126 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:20:29.892832 kernel: audit: type=1130 audit(1768353629.875:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.894605 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 01:20:29.904457 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 01:20:29.905827 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 01:20:29.926876 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 01:20:29.942943 kernel: audit: type=1130 audit(1768353629.926:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.930732 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 01:20:29.954873 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 01:20:29.971895 kernel: audit: type=1130 audit(1768353629.958:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.972194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 01:20:29.973432 systemd-tmpfiles[336]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 14 01:20:30.004212 kernel: audit: type=1130 audit(1768353629.975:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.004244 kernel: audit: type=1130 audit(1768353629.986:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:29.979582 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 01:20:30.004336 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 01:20:30.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.016685 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 01:20:30.017000 audit: BPF prog-id=6 op=LOAD
Jan 14 01:20:30.018949 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 01:20:30.062752 dracut-cmdline[355]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260
Jan 14 01:20:30.106250 systemd-resolved[356]: Positive Trust Anchors:
Jan 14 01:20:30.106284 systemd-resolved[356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 01:20:30.106288 systemd-resolved[356]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 01:20:30.106315 systemd-resolved[356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 01:20:30.126676 systemd-resolved[356]: Defaulting to hostname 'linux'.
Jan 14 01:20:30.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.127863 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 01:20:30.149409 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 01:20:30.251632 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 01:20:30.268625 kernel: iscsi: registered transport (tcp)
Jan 14 01:20:30.295461 kernel: iscsi: registered transport (qla4xxx)
Jan 14 01:20:30.295588 kernel: QLogic iSCSI HBA Driver
Jan 14 01:20:30.330252 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 01:20:30.356621 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 01:20:30.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.368941 kernel: kauditd_printk_skb: 3 callbacks suppressed
Jan 14 01:20:30.368970 kernel: audit: type=1130 audit(1768353630.365:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.368865 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 01:20:30.437433 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 01:20:30.453486 kernel: audit: type=1130 audit(1768353630.440:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.442935 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 01:20:30.472421 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 01:20:30.513078 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 01:20:30.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.529626 kernel: audit: type=1130 audit(1768353630.520:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.529000 audit: BPF prog-id=7 op=LOAD
Jan 14 01:20:30.531270 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 01:20:30.541415 kernel: audit: type=1334 audit(1768353630.529:17): prog-id=7 op=LOAD
Jan 14 01:20:30.541442 kernel: audit: type=1334 audit(1768353630.529:18): prog-id=8 op=LOAD
Jan 14 01:20:30.529000 audit: BPF prog-id=8 op=LOAD
Jan 14 01:20:30.569865 systemd-udevd[592]: Using default interface naming scheme 'v257'.
Jan 14 01:20:30.585967 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 01:20:30.603908 kernel: audit: type=1130 audit(1768353630.590:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.603695 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 01:20:30.636701 dracut-pre-trigger[654]: rd.md=0: removing MD RAID activation
Jan 14 01:20:30.666459 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 01:20:30.687127 kernel: audit: type=1130 audit(1768353630.670:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.687204 kernel: audit: type=1334 audit(1768353630.671:21): prog-id=9 op=LOAD
Jan 14 01:20:30.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.671000 audit: BPF prog-id=9 op=LOAD
Jan 14 01:20:30.672943 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 01:20:30.697616 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 01:20:30.713664 kernel: audit: type=1130 audit(1768353630.701:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.712693 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 01:20:30.762430 systemd-networkd[726]: lo: Link UP
Jan 14 01:20:30.762463 systemd-networkd[726]: lo: Gained carrier
Jan 14 01:20:30.768290 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 01:20:30.784207 kernel: audit: type=1130 audit(1768353630.771:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.772633 systemd[1]: Reached target network.target - Network.
Jan 14 01:20:30.826646 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 01:20:30.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:30.837188 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 01:20:30.903577 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 14 01:20:30.945339 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 14 01:20:30.953853 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 01:20:30.981124 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 14 01:20:30.993847 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 01:20:31.008920 kernel: AES CTR mode by8 optimization enabled
Jan 14 01:20:30.993856 systemd-networkd[726]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 01:20:30.994466 systemd-networkd[726]: eth0: Link UP
Jan 14 01:20:30.995601 systemd-networkd[726]: eth0: Gained carrier
Jan 14 01:20:30.995611 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 01:20:31.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:31.001401 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 14 01:20:31.028045 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 01:20:31.032933 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 01:20:31.033268 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:20:31.046495 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 01:20:31.057878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 01:20:31.087597 disk-uuid[840]: Primary Header is updated.
Jan 14 01:20:31.087597 disk-uuid[840]: Secondary Entries is updated.
Jan 14 01:20:31.087597 disk-uuid[840]: Secondary Header is updated.
Jan 14 01:20:31.077749 systemd-networkd[726]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 14 01:20:31.125869 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 14 01:20:31.189128 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 01:20:31.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:31.320453 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 01:20:31.325215 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 01:20:31.333996 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 01:20:31.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:31.339933 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 01:20:31.350049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:20:31.379649 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 01:20:31.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:32.133325 disk-uuid[842]: Warning: The kernel is still using the old partition table.
Jan 14 01:20:32.133325 disk-uuid[842]: The new table will be used at the next reboot or after you
Jan 14 01:20:32.133325 disk-uuid[842]: run partprobe(8) or kpartx(8)
Jan 14 01:20:32.133325 disk-uuid[842]: The operation has completed successfully.
Jan 14 01:20:32.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:32.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:32.144602 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 01:20:32.144773 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 01:20:32.152076 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 01:20:32.204703 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867)
Jan 14 01:20:32.212847 kernel: BTRFS info (device vda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:20:32.212895 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:20:32.223575 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 01:20:32.223620 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 01:20:32.235629 kernel: BTRFS info (device vda6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:20:32.238146 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 01:20:32.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:32.239731 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 01:20:32.364734 ignition[886]: Ignition 2.24.0
Jan 14 01:20:32.364775 ignition[886]: Stage: fetch-offline
Jan 14 01:20:32.364815 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:20:32.364827 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:20:32.364903 ignition[886]: parsed url from cmdline: ""
Jan 14 01:20:32.364907 ignition[886]: no config URL provided
Jan 14 01:20:32.364912 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 01:20:32.364922 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Jan 14 01:20:32.364961 ignition[886]: op(1): [started] loading QEMU firmware config module
Jan 14 01:20:32.364966 ignition[886]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 14 01:20:32.395682 ignition[886]: op(1): [finished] loading QEMU firmware config module
Jan 14 01:20:32.654387 ignition[886]: parsing config with SHA512: 895a1155bd596a0f3bf4348f600561e5638404e3f2bba062e80b8a879738f9177b0a98e693fac134e5fa72fe18b4a76bd45b429bf0e32db86e21cdd399226759
Jan 14 01:20:32.661136 unknown[886]: fetched base config from "system"
Jan 14 01:20:32.661146 unknown[886]: fetched user config from "qemu"
Jan 14 01:20:32.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:32.661853 ignition[886]: fetch-offline: fetch-offline passed
Jan 14 01:20:32.664315 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 01:20:32.661914 ignition[886]: Ignition finished successfully
Jan 14 01:20:32.665025 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 14 01:20:32.666146 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 01:20:32.722445 ignition[897]: Ignition 2.24.0
Jan 14 01:20:32.722496 ignition[897]: Stage: kargs
Jan 14 01:20:32.722703 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:20:32.722714 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:20:32.723997 ignition[897]: kargs: kargs passed
Jan 14 01:20:32.724044 ignition[897]: Ignition finished successfully
Jan 14 01:20:32.743796 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 01:20:32.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:32.745267 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 01:20:32.792568 ignition[904]: Ignition 2.24.0
Jan 14 01:20:32.792600 ignition[904]: Stage: disks
Jan 14 01:20:32.792734 ignition[904]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:20:32.792744 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:20:32.793464 ignition[904]: disks: disks passed
Jan 14 01:20:32.807882 ignition[904]: Ignition finished successfully
Jan 14 01:20:32.810063 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 01:20:32.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:32.812008 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 01:20:32.820994 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 01:20:32.828901 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 01:20:32.829028 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 01:20:32.839977 systemd[1]: Reached target basic.target - Basic System.
Jan 14 01:20:32.847783 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 01:20:32.897389 systemd-fsck[913]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 14 01:20:32.903462 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 01:20:32.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:32.904918 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 01:20:32.924685 systemd-networkd[726]: eth0: Gained IPv6LL
Jan 14 01:20:33.411614 kernel: EXT4-fs (vda9): mounted filesystem 9c98b0a3-27fc-41c4-a169-349b38bd9ceb r/w with ordered data mode. Quota mode: none.
Jan 14 01:20:33.412694 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 01:20:33.417557 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 01:20:33.428249 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 01:20:33.433383 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 01:20:33.447382 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 14 01:20:33.447448 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 01:20:33.447472 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 01:20:33.481770 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 01:20:33.489605 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (921)
Jan 14 01:20:33.497609 kernel: BTRFS info (device vda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:20:33.497633 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:20:33.500714 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 01:20:33.512632 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 01:20:33.512652 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 01:20:33.513911 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 01:20:33.736035 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 01:20:33.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:33.741710 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 01:20:33.758130 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 01:20:33.829278 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 01:20:33.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:33.848587 kernel: BTRFS info (device vda6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:20:34.083314 ignition[1022]: INFO : Ignition 2.24.0
Jan 14 01:20:34.083314 ignition[1022]: INFO : Stage: mount
Jan 14 01:20:34.091575 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 01:20:34.091575 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:20:34.102411 ignition[1022]: INFO : mount: mount passed
Jan 14 01:20:34.102411 ignition[1022]: INFO : Ignition finished successfully
Jan 14 01:20:34.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:34.103933 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 01:20:34.107961 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 01:20:34.412980 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 01:20:34.414863 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 01:20:34.455622 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1031)
Jan 14 01:20:34.463008 kernel: BTRFS info (device vda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:20:34.463040 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:20:34.474680 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 01:20:34.474736 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 01:20:34.477771 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 01:20:34.543255 ignition[1048]: INFO : Ignition 2.24.0
Jan 14 01:20:34.543255 ignition[1048]: INFO : Stage: files
Jan 14 01:20:34.550631 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 01:20:34.550631 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:20:34.558580 ignition[1048]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 01:20:34.564695 ignition[1048]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 01:20:34.570467 ignition[1048]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 01:20:34.576078 ignition[1048]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 01:20:34.576078 ignition[1048]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 01:20:34.576078 ignition[1048]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 01:20:34.576078 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 14 01:20:34.576078 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 14 01:20:34.572400 unknown[1048]: wrote ssh authorized keys file for user: core
Jan 14 01:20:34.645642 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 01:20:34.750433 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 14 01:20:34.750433 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 01:20:34.765113 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 01:20:34.765113 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 01:20:34.765113 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 01:20:34.765113 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 01:20:34.791859 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 01:20:34.791859 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 01:20:34.791859 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 01:20:34.791859 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 01:20:34.791859 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 01:20:34.791859 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 01:20:34.791859 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 01:20:34.791859 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 01:20:34.791859 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 14 01:20:35.140000 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 14 01:20:36.205400 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 01:20:36.205400 ignition[1048]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 14 01:20:36.222717 ignition[1048]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 01:20:36.229870 ignition[1048]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 01:20:36.229870 ignition[1048]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 14 01:20:36.229870 ignition[1048]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 14 01:20:36.229870 ignition[1048]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 01:20:36.229870 ignition[1048]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 01:20:36.229870 ignition[1048]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 14 01:20:36.229870 ignition[1048]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 14 01:20:36.275926 ignition[1048]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 01:20:36.275926 ignition[1048]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 01:20:36.275926 ignition[1048]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 14 01:20:36.275926 ignition[1048]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 01:20:36.275926 ignition[1048]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 01:20:36.275926 ignition[1048]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 01:20:36.275926 ignition[1048]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 01:20:36.275926 ignition[1048]: INFO : files: files passed
Jan 14 01:20:36.275926 ignition[1048]: INFO : Ignition finished successfully
Jan 14 01:20:36.343674 kernel: kauditd_printk_skb: 15 callbacks suppressed
Jan 14 01:20:36.343707 kernel: audit: type=1130 audit(1768353636.288:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.283697 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 01:20:36.290438 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 01:20:36.352481 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 01:20:36.353108 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 01:20:36.384120 kernel: audit: type=1130 audit(1768353636.360:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.384149 kernel: audit: type=1131 audit(1768353636.360:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.353305 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 01:20:36.403793 initrd-setup-root-after-ignition[1081]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 14 01:20:36.414830 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:20:36.414830 initrd-setup-root-after-ignition[1083]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:20:36.440044 kernel: audit: type=1130 audit(1768353636.420:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.440148 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:20:36.416713 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 01:20:36.421237 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 01:20:36.450832 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 01:20:36.543657 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 01:20:36.543851 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 01:20:36.573011 kernel: audit: type=1130 audit(1768353636.551:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.573048 kernel: audit: type=1131 audit(1768353636.551:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.552644 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 01:20:36.576931 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 01:20:36.577955 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 01:20:36.593665 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 01:20:36.638954 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 01:20:36.656232 kernel: audit: type=1130 audit(1768353636.638:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.642703 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 01:20:36.679265 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 01:20:36.679589 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 01:20:36.693313 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 01:20:36.698450 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 01:20:36.709739 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 01:20:36.709969 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 01:20:36.730598 kernel: audit: type=1131 audit(1768353636.713:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.730833 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 01:20:36.735305 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 01:20:36.745827 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 01:20:36.750797 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 01:20:36.755116 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 01:20:36.771712 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 01:20:36.776665 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 01:20:36.781472 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 01:20:36.797773 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 01:20:36.801816 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 01:20:36.812844 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 01:20:36.816051 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 01:20:36.834249 kernel: audit: type=1131 audit(1768353636.818:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.816282 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 01:20:36.834443 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 01:20:36.838497 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 01:20:36.845923 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 01:20:36.857642 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 01:20:36.862377 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 01:20:36.882675 kernel: audit: type=1131 audit(1768353636.866:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.862593 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 01:20:36.882809 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 01:20:36.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.882956 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 01:20:36.886998 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 01:20:36.897383 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 01:20:36.908795 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 01:20:36.909078 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 01:20:36.924637 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 01:20:36.928229 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 01:20:36.928347 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 01:20:36.931447 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 01:20:36.931676 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 01:20:36.937706 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 14 01:20:36.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.937802 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 01:20:36.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.944096 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 01:20:36.944318 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 01:20:36.958930 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 01:20:36.959119 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 01:20:36.985873 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 01:20:36.990473 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 01:20:36.993472 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 01:20:37.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:36.993671 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 01:20:37.007917 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 01:20:37.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.008027 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 01:20:37.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.021254 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 01:20:37.021383 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 01:20:37.043244 ignition[1107]: INFO : Ignition 2.24.0
Jan 14 01:20:37.043244 ignition[1107]: INFO : Stage: umount
Jan 14 01:20:37.043244 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 01:20:37.043244 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:20:37.043244 ignition[1107]: INFO : umount: umount passed
Jan 14 01:20:37.043244 ignition[1107]: INFO : Ignition finished successfully
Jan 14 01:20:37.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.052339 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 01:20:37.053367 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 01:20:37.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.053575 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 01:20:37.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.064893 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 01:20:37.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.065053 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 01:20:37.070373 systemd[1]: Stopped target network.target - Network.
Jan 14 01:20:37.077413 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 01:20:37.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.077485 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 01:20:37.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.084149 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 01:20:37.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.084265 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 01:20:37.087418 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 01:20:37.087473 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 01:20:37.100946 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 01:20:37.101043 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 01:20:37.107744 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 01:20:37.114709 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 01:20:37.118667 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 01:20:37.118834 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 01:20:37.125228 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 01:20:37.125338 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 01:20:37.131925 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 01:20:37.132130 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 01:20:37.192251 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 01:20:37.195000 audit: BPF prog-id=6 op=UNLOAD
Jan 14 01:20:37.195913 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 01:20:37.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.206257 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 14 01:20:37.205000 audit: BPF prog-id=9 op=UNLOAD
Jan 14 01:20:37.206471 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 01:20:37.206611 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 01:20:37.226934 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 01:20:37.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.230319 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 01:20:37.230429 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 01:20:37.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.238842 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 01:20:37.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.238909 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 01:20:37.243104 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 01:20:37.243264 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 01:20:37.262067 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 01:20:37.291486 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 01:20:37.295856 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 01:20:37.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.306968 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 01:20:37.307231 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 01:20:37.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.320666 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 01:20:37.320818 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 01:20:37.330144 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 01:20:37.330272 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 01:20:37.343590 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 01:20:37.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.343683 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 01:20:37.354694 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 01:20:37.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.354784 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 01:20:37.366598 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 01:20:37.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.366682 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 01:20:37.384949 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 01:20:37.385114 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 14 01:20:37.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.385250 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 01:20:37.393919 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 01:20:37.393989 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 01:20:37.403324 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 14 01:20:37.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.403378 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 01:20:37.415825 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 01:20:37.415884 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 01:20:37.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:20:37.424764 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 01:20:37.424814 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:20:37.451764 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 01:20:37.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:37.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:37.451872 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 01:20:37.462833 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 01:20:37.467859 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 01:20:37.518987 systemd[1]: Switching root. Jan 14 01:20:37.566049 systemd-journald[319]: Journal stopped Jan 14 01:20:39.274855 systemd-journald[319]: Received SIGTERM from PID 1 (systemd). Jan 14 01:20:39.274931 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 01:20:39.274945 kernel: SELinux: policy capability open_perms=1 Jan 14 01:20:39.274960 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 01:20:39.274975 kernel: SELinux: policy capability always_check_network=0 Jan 14 01:20:39.274987 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 01:20:39.275003 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 01:20:39.275016 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 01:20:39.275028 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 01:20:39.275038 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 01:20:39.275054 systemd[1]: Successfully loaded SELinux policy in 80.874ms. Jan 14 01:20:39.275068 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.898ms. 
Jan 14 01:20:39.275081 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:20:39.275093 systemd[1]: Detected virtualization kvm. Jan 14 01:20:39.275111 systemd[1]: Detected architecture x86-64. Jan 14 01:20:39.275123 systemd[1]: Detected first boot. Jan 14 01:20:39.275135 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 01:20:39.275147 zram_generator::config[1151]: No configuration found. Jan 14 01:20:39.275200 kernel: Guest personality initialized and is inactive Jan 14 01:20:39.275216 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 14 01:20:39.275230 kernel: Initialized host personality Jan 14 01:20:39.275242 kernel: NET: Registered PF_VSOCK protocol family Jan 14 01:20:39.275257 systemd[1]: Populated /etc with preset unit settings. Jan 14 01:20:39.275270 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 01:20:39.275281 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 01:20:39.275293 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 01:20:39.275309 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 01:20:39.275324 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 01:20:39.275336 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 01:20:39.275347 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 01:20:39.275359 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Jan 14 01:20:39.275371 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 01:20:39.275383 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 01:20:39.275394 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 01:20:39.275408 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:20:39.275420 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:20:39.275431 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 01:20:39.275443 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 01:20:39.275455 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 01:20:39.275466 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:20:39.275480 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 01:20:39.275494 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:20:39.275563 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:20:39.275577 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 01:20:39.275588 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 01:20:39.275600 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 01:20:39.275611 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 01:20:39.275626 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:20:39.275638 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jan 14 01:20:39.275650 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 01:20:39.275662 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:20:39.275674 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:20:39.275686 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 01:20:39.275697 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 01:20:39.275709 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 01:20:39.275722 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:20:39.275735 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 01:20:39.275747 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:20:39.275759 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 01:20:39.275770 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 01:20:39.275782 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:20:39.275795 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:20:39.275809 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 01:20:39.275820 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 01:20:39.275832 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 01:20:39.275844 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 01:20:39.275855 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:20:39.275869 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 14 01:20:39.275881 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 01:20:39.275895 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 01:20:39.275907 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 01:20:39.275918 systemd[1]: Reached target machines.target - Containers. Jan 14 01:20:39.275931 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 01:20:39.275943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:20:39.275955 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:20:39.275968 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 01:20:39.275980 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:20:39.275991 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:20:39.276003 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:20:39.276015 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 01:20:39.276027 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:20:39.276038 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 01:20:39.276053 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 01:20:39.276065 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 01:20:39.276076 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 01:20:39.276088 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 14 01:20:39.276099 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:20:39.276111 kernel: ACPI: bus type drm_connector registered Jan 14 01:20:39.276124 kernel: fuse: init (API version 7.41) Jan 14 01:20:39.276136 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:20:39.276147 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:20:39.276159 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:20:39.276212 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 01:20:39.276227 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 01:20:39.276239 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:20:39.276270 systemd-journald[1237]: Collecting audit messages is enabled. Jan 14 01:20:39.276292 systemd-journald[1237]: Journal started Jan 14 01:20:39.276311 systemd-journald[1237]: Runtime Journal (/run/log/journal/4d59ca1171374e269ccbe3939beadb1a) is 6M, max 48.2M, 42.1M free. Jan 14 01:20:38.906000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 01:20:39.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 01:20:39.203000 audit: BPF prog-id=14 op=UNLOAD Jan 14 01:20:39.203000 audit: BPF prog-id=13 op=UNLOAD Jan 14 01:20:39.209000 audit: BPF prog-id=15 op=LOAD Jan 14 01:20:39.210000 audit: BPF prog-id=16 op=LOAD Jan 14 01:20:39.210000 audit: BPF prog-id=17 op=LOAD Jan 14 01:20:39.272000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 01:20:39.272000 audit[1237]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc9530e4b0 a2=4000 a3=0 items=0 ppid=1 pid=1237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:39.272000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 01:20:38.566362 systemd[1]: Queued start job for default target multi-user.target. Jan 14 01:20:38.592107 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 14 01:20:38.592979 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 01:20:39.284622 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:20:39.292606 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:20:39.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.298946 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 01:20:39.304045 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 01:20:39.309154 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 14 01:20:39.313097 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 01:20:39.317288 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 01:20:39.321983 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 01:20:39.326424 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 01:20:39.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.331315 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:20:39.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.336260 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 01:20:39.336619 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 01:20:39.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.341244 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:20:39.341573 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 14 01:20:39.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.346044 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:20:39.346375 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:20:39.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.350723 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:20:39.350984 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:20:39.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.355928 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 01:20:39.356262 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 14 01:20:39.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.360691 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:20:39.360955 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:20:39.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.365320 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:20:39.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.369980 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:20:39.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.375896 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 14 01:20:39.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.381409 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 01:20:39.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.398277 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:20:39.403208 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 01:20:39.409618 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 01:20:39.414892 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 01:20:39.418976 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 01:20:39.419036 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:20:39.423707 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 01:20:39.428290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:20:39.428451 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:20:39.430649 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 01:20:39.435751 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 14 01:20:39.440086 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:20:39.441336 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 01:20:39.445324 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:20:39.446432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:20:39.452804 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 01:20:39.458141 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 01:20:39.467817 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:20:39.471465 systemd-journald[1237]: Time spent on flushing to /var/log/journal/4d59ca1171374e269ccbe3939beadb1a is 20.648ms for 1105 entries. Jan 14 01:20:39.471465 systemd-journald[1237]: System Journal (/var/log/journal/4d59ca1171374e269ccbe3939beadb1a) is 8M, max 163.5M, 155.5M free. Jan 14 01:20:39.515330 systemd-journald[1237]: Received client request to flush runtime journal. Jan 14 01:20:39.515382 kernel: loop1: detected capacity change from 0 to 50784 Jan 14 01:20:39.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.476842 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jan 14 01:20:39.481453 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 01:20:39.486153 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 01:20:39.498394 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 01:20:39.509796 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 01:20:39.518394 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 01:20:39.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.524267 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:20:39.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.529703 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Jan 14 01:20:39.530053 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Jan 14 01:20:39.537699 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:20:39.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.545154 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 01:20:39.551312 kernel: loop2: detected capacity change from 0 to 229808 Jan 14 01:20:39.560243 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Jan 14 01:20:39.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.583584 kernel: loop3: detected capacity change from 0 to 111560 Jan 14 01:20:39.590363 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 01:20:39.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.596706 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 01:20:39.597000 audit: BPF prog-id=18 op=LOAD Jan 14 01:20:39.598000 audit: BPF prog-id=19 op=LOAD Jan 14 01:20:39.598000 audit: BPF prog-id=20 op=LOAD Jan 14 01:20:39.599444 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 01:20:39.604000 audit: BPF prog-id=21 op=LOAD Jan 14 01:20:39.606723 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:20:39.613827 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:20:39.635000 audit: BPF prog-id=22 op=LOAD Jan 14 01:20:39.635000 audit: BPF prog-id=23 op=LOAD Jan 14 01:20:39.635000 audit: BPF prog-id=24 op=LOAD Jan 14 01:20:39.636748 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 01:20:39.644000 audit: BPF prog-id=25 op=LOAD Jan 14 01:20:39.646587 kernel: loop4: detected capacity change from 0 to 50784 Jan 14 01:20:39.650803 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jan 14 01:20:39.650000 audit: BPF prog-id=26 op=LOAD Jan 14 01:20:39.650000 audit: BPF prog-id=27 op=LOAD Jan 14 01:20:39.650845 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. 
Jan 14 01:20:39.651478 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 01:20:39.657075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:20:39.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.671595 kernel: loop5: detected capacity change from 0 to 229808 Jan 14 01:20:39.689581 kernel: loop6: detected capacity change from 0 to 111560 Jan 14 01:20:39.694003 systemd-nsresourced[1297]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 01:20:39.695340 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 01:20:39.701155 (sd-merge)[1299]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 14 01:20:39.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:39.707567 (sd-merge)[1299]: Merged extensions into '/usr'. Jan 14 01:20:39.712917 systemd[1]: Reload requested from client PID 1271 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 01:20:39.713008 systemd[1]: Reloading... Jan 14 01:20:39.782602 zram_generator::config[1343]: No configuration found. Jan 14 01:20:39.825420 systemd-oomd[1293]: No swap; memory pressure usage will be degraded Jan 14 01:20:39.831890 systemd-resolved[1295]: Positive Trust Anchors: Jan 14 01:20:39.831936 systemd-resolved[1295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:20:39.831941 systemd-resolved[1295]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:20:39.831967 systemd-resolved[1295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:20:39.838447 systemd-resolved[1295]: Defaulting to hostname 'linux'. Jan 14 01:20:39.994671 systemd[1]: Reloading finished in 281 ms. Jan 14 01:20:40.037413 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 01:20:40.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:40.042400 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 01:20:40.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:40.047737 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:20:40.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:40.052450 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 14 01:20:40.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:40.057768 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 01:20:40.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:40.069002 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:20:40.095276 systemd[1]: Starting ensure-sysext.service... Jan 14 01:20:40.099902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:20:40.104000 audit: BPF prog-id=8 op=UNLOAD Jan 14 01:20:40.104000 audit: BPF prog-id=7 op=UNLOAD Jan 14 01:20:40.104000 audit: BPF prog-id=28 op=LOAD Jan 14 01:20:40.104000 audit: BPF prog-id=29 op=LOAD Jan 14 01:20:40.106346 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 14 01:20:40.126000 audit: BPF prog-id=30 op=LOAD Jan 14 01:20:40.126000 audit: BPF prog-id=21 op=UNLOAD Jan 14 01:20:40.127000 audit: BPF prog-id=31 op=LOAD Jan 14 01:20:40.127000 audit: BPF prog-id=22 op=UNLOAD Jan 14 01:20:40.127000 audit: BPF prog-id=32 op=LOAD Jan 14 01:20:40.127000 audit: BPF prog-id=33 op=LOAD Jan 14 01:20:40.127000 audit: BPF prog-id=23 op=UNLOAD Jan 14 01:20:40.127000 audit: BPF prog-id=24 op=UNLOAD Jan 14 01:20:40.128000 audit: BPF prog-id=34 op=LOAD Jan 14 01:20:40.128000 audit: BPF prog-id=25 op=UNLOAD Jan 14 01:20:40.128000 audit: BPF prog-id=35 op=LOAD Jan 14 01:20:40.129000 audit: BPF prog-id=36 op=LOAD Jan 14 01:20:40.129000 audit: BPF prog-id=26 op=UNLOAD Jan 14 01:20:40.129000 audit: BPF prog-id=27 op=UNLOAD Jan 14 01:20:40.130000 audit: BPF prog-id=37 op=LOAD Jan 14 01:20:40.130000 audit: BPF prog-id=15 op=UNLOAD Jan 14 01:20:40.130000 audit: BPF prog-id=38 op=LOAD Jan 14 01:20:40.130000 audit: BPF prog-id=39 op=LOAD Jan 14 01:20:40.130000 audit: BPF prog-id=16 op=UNLOAD Jan 14 01:20:40.130000 audit: BPF prog-id=17 op=UNLOAD Jan 14 01:20:40.134000 audit: BPF prog-id=40 op=LOAD Jan 14 01:20:40.134000 audit: BPF prog-id=18 op=UNLOAD Jan 14 01:20:40.134000 audit: BPF prog-id=41 op=LOAD Jan 14 01:20:40.134000 audit: BPF prog-id=42 op=LOAD Jan 14 01:20:40.134000 audit: BPF prog-id=19 op=UNLOAD Jan 14 01:20:40.134000 audit: BPF prog-id=20 op=UNLOAD Jan 14 01:20:40.141405 systemd[1]: Reload requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)... Jan 14 01:20:40.141455 systemd[1]: Reloading... Jan 14 01:20:40.145928 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 01:20:40.145988 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 01:20:40.146494 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 14 01:20:40.148474 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Jan 14 01:20:40.149014 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Jan 14 01:20:40.158013 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:20:40.158097 systemd-tmpfiles[1382]: Skipping /boot Jan 14 01:20:40.171128 systemd-udevd[1383]: Using default interface naming scheme 'v257'. Jan 14 01:20:40.176032 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:20:40.176044 systemd-tmpfiles[1382]: Skipping /boot Jan 14 01:20:40.207665 zram_generator::config[1415]: No configuration found. Jan 14 01:20:40.320698 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 01:20:40.337610 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 14 01:20:40.348777 kernel: ACPI: button: Power Button [PWRF] Jan 14 01:20:40.354890 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 14 01:20:40.356782 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 14 01:20:40.488678 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 01:20:40.494884 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 01:20:40.495723 systemd[1]: Reloading finished in 353 ms. Jan 14 01:20:40.571746 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:20:40.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:20:40.584000 audit: BPF prog-id=43 op=LOAD Jan 14 01:20:40.585000 audit: BPF prog-id=44 op=LOAD Jan 14 01:20:40.585000 audit: BPF prog-id=28 op=UNLOAD Jan 14 01:20:40.587000 audit: BPF prog-id=29 op=UNLOAD Jan 14 01:20:40.589000 audit: BPF prog-id=45 op=LOAD Jan 14 01:20:40.589000 audit: BPF prog-id=40 op=UNLOAD Jan 14 01:20:40.589000 audit: BPF prog-id=46 op=LOAD Jan 14 01:20:40.589000 audit: BPF prog-id=47 op=LOAD Jan 14 01:20:40.589000 audit: BPF prog-id=41 op=UNLOAD Jan 14 01:20:40.589000 audit: BPF prog-id=42 op=UNLOAD Jan 14 01:20:40.589000 audit: BPF prog-id=48 op=LOAD Jan 14 01:20:40.589000 audit: BPF prog-id=34 op=UNLOAD Jan 14 01:20:40.591000 audit: BPF prog-id=49 op=LOAD Jan 14 01:20:40.591000 audit: BPF prog-id=50 op=LOAD Jan 14 01:20:40.591000 audit: BPF prog-id=35 op=UNLOAD Jan 14 01:20:40.591000 audit: BPF prog-id=36 op=UNLOAD Jan 14 01:20:40.591000 audit: BPF prog-id=51 op=LOAD Jan 14 01:20:40.591000 audit: BPF prog-id=30 op=UNLOAD Jan 14 01:20:40.593587 kernel: kvm_amd: TSC scaling supported Jan 14 01:20:40.593689 kernel: kvm_amd: Nested Virtualization enabled Jan 14 01:20:40.593709 kernel: kvm_amd: Nested Paging enabled Jan 14 01:20:40.593724 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 14 01:20:40.593000 audit: BPF prog-id=52 op=LOAD Jan 14 01:20:40.593000 audit: BPF prog-id=31 op=UNLOAD Jan 14 01:20:40.593000 audit: BPF prog-id=53 op=LOAD Jan 14 01:20:40.593000 audit: BPF prog-id=54 op=LOAD Jan 14 01:20:40.593000 audit: BPF prog-id=32 op=UNLOAD Jan 14 01:20:40.593000 audit: BPF prog-id=33 op=UNLOAD Jan 14 01:20:40.595000 audit: BPF prog-id=55 op=LOAD Jan 14 01:20:40.595000 audit: BPF prog-id=37 op=UNLOAD Jan 14 01:20:40.595000 audit: BPF prog-id=56 op=LOAD Jan 14 01:20:40.595000 audit: BPF prog-id=57 op=LOAD Jan 14 01:20:40.595000 audit: BPF prog-id=38 op=UNLOAD Jan 14 01:20:40.595000 audit: BPF prog-id=39 op=UNLOAD Jan 14 01:20:40.599803 kernel: kvm_amd: PMU virtualization is disabled Jan 14 01:20:40.638261 
systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:20:40.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:40.669669 kernel: EDAC MC: Ver: 3.0.0 Jan 14 01:20:40.676806 systemd[1]: Finished ensure-sysext.service. Jan 14 01:20:40.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:40.699389 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:20:40.701017 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:20:40.706340 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 01:20:40.710679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:20:40.712025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:20:40.721777 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:20:40.727127 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:20:40.735346 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:20:40.745025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:20:40.745675 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:20:40.748705 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 14 01:20:40.755865 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 01:20:40.761269 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:20:40.762912 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 01:20:40.772000 audit: BPF prog-id=58 op=LOAD Jan 14 01:20:40.775212 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:20:40.779000 audit: BPF prog-id=59 op=LOAD Jan 14 01:20:40.781373 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 14 01:20:40.789688 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 01:20:40.806753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:20:40.806000 audit[1521]: SYSTEM_BOOT pid=1521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 01:20:40.810732 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:20:40.812671 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:20:40.814912 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:20:40.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:20:40.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:40.820082 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:20:40.820407 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:20:40.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:40.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:40.825391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:20:40.825672 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:20:40.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:40.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:40.833391 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:20:40.833960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:20:40.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:20:40.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:40.840798 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 01:20:40.841000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 01:20:40.841000 audit[1531]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdcf4d1140 a2=420 a3=0 items=0 ppid=1497 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:40.841000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:20:40.842652 augenrules[1531]: No rules Jan 14 01:20:40.846242 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:20:40.846574 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:20:40.847255 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 01:20:40.852133 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 01:20:40.872896 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 01:20:40.876268 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:20:40.876361 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 14 01:20:40.876383 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 01:20:40.944713 systemd-networkd[1516]: lo: Link UP Jan 14 01:20:40.944727 systemd-networkd[1516]: lo: Gained carrier Jan 14 01:20:40.946896 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:20:40.946932 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:20:40.946938 systemd-networkd[1516]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:20:40.950833 systemd-networkd[1516]: eth0: Link UP Jan 14 01:20:40.952907 systemd-networkd[1516]: eth0: Gained carrier Jan 14 01:20:40.952948 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:20:40.977612 systemd-networkd[1516]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 14 01:20:40.978679 systemd-timesyncd[1520]: Network configuration changed, trying to establish connection. Jan 14 01:20:40.979864 systemd-timesyncd[1520]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 14 01:20:40.979962 systemd-timesyncd[1520]: Initial clock synchronization to Wed 2026-01-14 01:20:41.252247 UTC. Jan 14 01:20:41.146209 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 01:20:41.156902 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:20:41.165707 systemd[1]: Reached target network.target - Network. Jan 14 01:20:41.170097 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 14 01:20:41.177115 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 01:20:41.183804 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 01:20:41.209805 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 01:20:41.326129 ldconfig[1510]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 01:20:41.333455 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 01:20:41.340479 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 01:20:41.380882 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 01:20:41.385607 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:20:41.389828 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 01:20:41.394504 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 01:20:41.399458 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 01:20:41.404290 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 01:20:41.408448 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 01:20:41.414393 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 01:20:41.419890 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 01:20:41.424714 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 01:20:41.430054 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Jan 14 01:20:41.430097 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:20:41.433984 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:20:41.438835 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 01:20:41.444689 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 01:20:41.450233 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 01:20:41.454937 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 01:20:41.459452 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 01:20:41.465720 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 01:20:41.470259 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 01:20:41.476109 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 01:20:41.481770 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:20:41.485732 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:20:41.489350 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:20:41.489409 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:20:41.490795 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 01:20:41.496087 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 01:20:41.512676 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 01:20:41.517937 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 01:20:41.531748 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 14 01:20:41.535232 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 01:20:41.536616 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 01:20:41.542076 jq[1565]: false Jan 14 01:20:41.542763 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 01:20:41.550669 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 01:20:41.550995 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing passwd entry cache Jan 14 01:20:41.551218 oslogin_cache_refresh[1567]: Refreshing passwd entry cache Jan 14 01:20:41.554201 extend-filesystems[1566]: Found /dev/vda6 Jan 14 01:20:41.558898 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 01:20:41.560719 extend-filesystems[1566]: Found /dev/vda9 Jan 14 01:20:41.568968 extend-filesystems[1566]: Checking size of /dev/vda9 Jan 14 01:20:41.574622 extend-filesystems[1566]: Resized partition /dev/vda9 Jan 14 01:20:41.577034 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 01:20:41.582167 extend-filesystems[1585]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 01:20:41.595160 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 14 01:20:41.602187 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting users, quitting Jan 14 01:20:41.602187 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:20:41.602187 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing group entry cache Jan 14 01:20:41.587718 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 14 01:20:41.582285 oslogin_cache_refresh[1567]: Failure getting users, quitting Jan 14 01:20:41.601865 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 01:20:41.582312 oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:20:41.582430 oslogin_cache_refresh[1567]: Refreshing group entry cache Jan 14 01:20:41.603488 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting groups, quitting Jan 14 01:20:41.603488 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:20:41.603477 oslogin_cache_refresh[1567]: Failure getting groups, quitting Jan 14 01:20:41.603497 oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:20:41.605198 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 01:20:41.607803 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 01:20:41.616082 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 01:20:41.628252 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 01:20:41.634231 jq[1589]: true Jan 14 01:20:41.635290 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 01:20:41.635900 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 01:20:41.636433 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 01:20:41.636936 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 01:20:41.641729 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 14 01:20:41.642144 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 01:20:41.651393 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 01:20:41.651998 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 01:20:41.659829 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 14 01:20:41.686297 update_engine[1588]: I20260114 01:20:41.662313 1588 main.cc:92] Flatcar Update Engine starting Jan 14 01:20:41.686985 jq[1600]: true Jan 14 01:20:41.688085 extend-filesystems[1585]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 14 01:20:41.688085 extend-filesystems[1585]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 14 01:20:41.688085 extend-filesystems[1585]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 14 01:20:41.700596 extend-filesystems[1566]: Resized filesystem in /dev/vda9 Jan 14 01:20:41.691699 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 01:20:41.696659 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 01:20:41.733832 tar[1597]: linux-amd64/LICENSE Jan 14 01:20:41.734092 tar[1597]: linux-amd64/helm Jan 14 01:20:41.750090 dbus-daemon[1563]: [system] SELinux support is enabled Jan 14 01:20:41.750710 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 01:20:41.758012 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 01:20:41.758070 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 01:20:41.762817 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 14 01:20:41.762865 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 01:20:41.766589 update_engine[1588]: I20260114 01:20:41.765641 1588 update_check_scheduler.cc:74] Next update check in 10m42s Jan 14 01:20:41.768722 systemd[1]: Started update-engine.service - Update Engine. Jan 14 01:20:41.781164 systemd-logind[1587]: Watching system buttons on /dev/input/event2 (Power Button) Jan 14 01:20:41.781226 systemd-logind[1587]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 01:20:41.781652 systemd-logind[1587]: New seat seat0. Jan 14 01:20:41.781831 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 01:20:41.786912 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 01:20:41.793170 bash[1632]: Updated "/home/core/.ssh/authorized_keys" Jan 14 01:20:41.794874 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 01:20:41.801767 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 14 01:20:41.875275 locksmithd[1633]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 01:20:41.880966 sshd_keygen[1596]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 01:20:41.915904 containerd[1611]: time="2026-01-14T01:20:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 01:20:41.916307 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 01:20:41.917585 containerd[1611]: time="2026-01-14T01:20:41.917319299Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 01:20:41.924129 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 14 01:20:41.927345 containerd[1611]: time="2026-01-14T01:20:41.927319071Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.501µs" Jan 14 01:20:41.927494 containerd[1611]: time="2026-01-14T01:20:41.927478138Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 01:20:41.927969 containerd[1611]: time="2026-01-14T01:20:41.927951359Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 01:20:41.929606 containerd[1611]: time="2026-01-14T01:20:41.928017569Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 01:20:41.929606 containerd[1611]: time="2026-01-14T01:20:41.928171425Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 01:20:41.929606 containerd[1611]: time="2026-01-14T01:20:41.928185773Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:20:41.929606 containerd[1611]: time="2026-01-14T01:20:41.928243965Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:20:41.929606 containerd[1611]: time="2026-01-14T01:20:41.928254656Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:20:41.929606 containerd[1611]: time="2026-01-14T01:20:41.928457534Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:20:41.929606 containerd[1611]: time="2026-01-14T01:20:41.928470681Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:20:41.929606 containerd[1611]: time="2026-01-14T01:20:41.928479808Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:20:41.929606 containerd[1611]: time="2026-01-14T01:20:41.928487319Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:20:41.929606 containerd[1611]: time="2026-01-14T01:20:41.928793579Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:20:41.929606 containerd[1611]: time="2026-01-14T01:20:41.928807617Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 01:20:41.929606 containerd[1611]: time="2026-01-14T01:20:41.928894163Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 01:20:41.929924 containerd[1611]: time="2026-01-14T01:20:41.929098295Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:20:41.929924 containerd[1611]: time="2026-01-14T01:20:41.929128236Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:20:41.929924 containerd[1611]: time="2026-01-14T01:20:41.929136876Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 01:20:41.929924 containerd[1611]: time="2026-01-14T01:20:41.929162299Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 01:20:41.929924 containerd[1611]: 
time="2026-01-14T01:20:41.929312767Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 01:20:41.929924 containerd[1611]: time="2026-01-14T01:20:41.929372150Z" level=info msg="metadata content store policy set" policy=shared Jan 14 01:20:41.936944 containerd[1611]: time="2026-01-14T01:20:41.936693622Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 01:20:41.936944 containerd[1611]: time="2026-01-14T01:20:41.936749877Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:20:41.936944 containerd[1611]: time="2026-01-14T01:20:41.936857091Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:20:41.936944 containerd[1611]: time="2026-01-14T01:20:41.936881241Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 01:20:41.936944 containerd[1611]: time="2026-01-14T01:20:41.936899504Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 01:20:41.936944 containerd[1611]: time="2026-01-14T01:20:41.936913988Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 01:20:41.936944 containerd[1611]: time="2026-01-14T01:20:41.936927642Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 01:20:41.937148 containerd[1611]: time="2026-01-14T01:20:41.936951740Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 01:20:41.937148 containerd[1611]: time="2026-01-14T01:20:41.936967477Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 
01:20:41.937148 containerd[1611]: time="2026-01-14T01:20:41.936984570Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 01:20:41.937148 containerd[1611]: time="2026-01-14T01:20:41.937000846Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 01:20:41.937148 containerd[1611]: time="2026-01-14T01:20:41.937081519Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 01:20:41.937148 containerd[1611]: time="2026-01-14T01:20:41.937099368Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 01:20:41.937238 containerd[1611]: time="2026-01-14T01:20:41.937157177Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 01:20:41.937345 containerd[1611]: time="2026-01-14T01:20:41.937307064Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 01:20:41.937370 containerd[1611]: time="2026-01-14T01:20:41.937340600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 01:20:41.937370 containerd[1611]: time="2026-01-14T01:20:41.937364055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 01:20:41.937410 containerd[1611]: time="2026-01-14T01:20:41.937379605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 01:20:41.937410 containerd[1611]: time="2026-01-14T01:20:41.937393880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 01:20:41.937410 containerd[1611]: time="2026-01-14T01:20:41.937406738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 01:20:41.937465 containerd[1611]: 
time="2026-01-14T01:20:41.937422246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 01:20:41.937465 containerd[1611]: time="2026-01-14T01:20:41.937435983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 01:20:41.937465 containerd[1611]: time="2026-01-14T01:20:41.937449275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 01:20:41.937511 containerd[1611]: time="2026-01-14T01:20:41.937462463Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 01:20:41.937511 containerd[1611]: time="2026-01-14T01:20:41.937477703Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 01:20:41.937620 containerd[1611]: time="2026-01-14T01:20:41.937509726Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 01:20:41.937841 containerd[1611]: time="2026-01-14T01:20:41.937800995Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 01:20:41.937867 containerd[1611]: time="2026-01-14T01:20:41.937842891Z" level=info msg="Start snapshots syncer" Jan 14 01:20:41.937950 containerd[1611]: time="2026-01-14T01:20:41.937896855Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 01:20:41.938423 containerd[1611]: time="2026-01-14T01:20:41.938346259Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 01:20:41.938640 containerd[1611]: time="2026-01-14T01:20:41.938426682Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 01:20:41.938640 containerd[1611]: 
time="2026-01-14T01:20:41.938480015Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 01:20:41.938792 containerd[1611]: time="2026-01-14T01:20:41.938746669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 01:20:41.938816 containerd[1611]: time="2026-01-14T01:20:41.938795402Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 01:20:41.938816 containerd[1611]: time="2026-01-14T01:20:41.938806032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 01:20:41.938849 containerd[1611]: time="2026-01-14T01:20:41.938815821Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 01:20:41.938849 containerd[1611]: time="2026-01-14T01:20:41.938831134Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 01:20:41.938849 containerd[1611]: time="2026-01-14T01:20:41.938841856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 01:20:41.938902 containerd[1611]: time="2026-01-14T01:20:41.938851273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 01:20:41.938902 containerd[1611]: time="2026-01-14T01:20:41.938859540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 01:20:41.938902 containerd[1611]: time="2026-01-14T01:20:41.938873443Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 01:20:41.938989 containerd[1611]: time="2026-01-14T01:20:41.938944181Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:20:41.939202 containerd[1611]: 
time="2026-01-14T01:20:41.939134099Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:20:41.939202 containerd[1611]: time="2026-01-14T01:20:41.939177777Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:20:41.939202 containerd[1611]: time="2026-01-14T01:20:41.939189525Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:20:41.939202 containerd[1611]: time="2026-01-14T01:20:41.939197408Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 01:20:41.939296 containerd[1611]: time="2026-01-14T01:20:41.939209385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 01:20:41.939296 containerd[1611]: time="2026-01-14T01:20:41.939220315Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 01:20:41.939296 containerd[1611]: time="2026-01-14T01:20:41.939233751Z" level=info msg="runtime interface created" Jan 14 01:20:41.939296 containerd[1611]: time="2026-01-14T01:20:41.939238807Z" level=info msg="created NRI interface" Jan 14 01:20:41.939296 containerd[1611]: time="2026-01-14T01:20:41.939246329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 01:20:41.939296 containerd[1611]: time="2026-01-14T01:20:41.939257134Z" level=info msg="Connect containerd service" Jan 14 01:20:41.939296 containerd[1611]: time="2026-01-14T01:20:41.939275544Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 01:20:41.941686 containerd[1611]: time="2026-01-14T01:20:41.941320810Z" level=error msg="failed to load cni during init, please check CRI plugin status 
before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 01:20:41.944967 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 01:20:41.945454 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 01:20:41.952409 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 01:20:41.987609 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 01:20:41.994448 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 01:20:41.999768 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 01:20:42.004402 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 01:20:42.051191 containerd[1611]: time="2026-01-14T01:20:42.051154026Z" level=info msg="Start subscribing containerd event" Jan 14 01:20:42.051372 containerd[1611]: time="2026-01-14T01:20:42.051346031Z" level=info msg="Start recovering state" Jan 14 01:20:42.051697 containerd[1611]: time="2026-01-14T01:20:42.051509889Z" level=info msg="Start event monitor" Jan 14 01:20:42.051753 containerd[1611]: time="2026-01-14T01:20:42.051697922Z" level=info msg="Start cni network conf syncer for default" Jan 14 01:20:42.051753 containerd[1611]: time="2026-01-14T01:20:42.051707879Z" level=info msg="Start streaming server" Jan 14 01:20:42.051753 containerd[1611]: time="2026-01-14T01:20:42.051718072Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 01:20:42.051753 containerd[1611]: time="2026-01-14T01:20:42.051725964Z" level=info msg="runtime interface starting up..." Jan 14 01:20:42.051753 containerd[1611]: time="2026-01-14T01:20:42.051731340Z" level=info msg="starting plugins..." 
Jan 14 01:20:42.051835 containerd[1611]: time="2026-01-14T01:20:42.051747589Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 01:20:42.051981 containerd[1611]: time="2026-01-14T01:20:42.051210047Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 01:20:42.052185 containerd[1611]: time="2026-01-14T01:20:42.052143124Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 01:20:42.052318 containerd[1611]: time="2026-01-14T01:20:42.052279189Z" level=info msg="containerd successfully booted in 0.137031s" Jan 14 01:20:42.052655 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 01:20:42.104162 tar[1597]: linux-amd64/README.md Jan 14 01:20:42.130034 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 01:20:42.716977 systemd-networkd[1516]: eth0: Gained IPv6LL Jan 14 01:20:42.720234 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 01:20:42.728526 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 01:20:42.737913 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 14 01:20:42.744314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:20:42.760479 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 01:20:42.796842 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 01:20:42.802431 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 14 01:20:42.802943 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 14 01:20:42.809918 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 01:20:43.694311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:20:43.699290 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 14 01:20:43.703601 systemd[1]: Startup finished in 3.763s (kernel) + 8.758s (initrd) + 5.998s (userspace) = 18.520s. Jan 14 01:20:43.723032 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:20:44.262670 kubelet[1704]: E0114 01:20:44.262423 1704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:20:44.267761 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:20:44.268063 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:20:44.268955 systemd[1]: kubelet.service: Consumed 985ms CPU time, 268.5M memory peak. Jan 14 01:20:51.501143 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 01:20:51.504235 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:59390.service - OpenSSH per-connection server daemon (10.0.0.1:59390). Jan 14 01:20:51.612778 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 59390 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:20:51.615832 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:20:51.625499 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 01:20:51.626842 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 01:20:51.634748 systemd-logind[1587]: New session 1 of user core. Jan 14 01:20:51.659329 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 01:20:51.663151 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 14 01:20:51.692413 (systemd)[1723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:20:51.696580 systemd-logind[1587]: New session 2 of user core. Jan 14 01:20:51.878131 systemd[1723]: Queued start job for default target default.target. Jan 14 01:20:51.890370 systemd[1723]: Created slice app.slice - User Application Slice. Jan 14 01:20:51.890442 systemd[1723]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 01:20:51.890457 systemd[1723]: Reached target paths.target - Paths. Jan 14 01:20:51.890613 systemd[1723]: Reached target timers.target - Timers. Jan 14 01:20:51.892689 systemd[1723]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 01:20:51.893942 systemd[1723]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 01:20:51.909324 systemd[1723]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 01:20:51.909894 systemd[1723]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 01:20:51.910090 systemd[1723]: Reached target sockets.target - Sockets. Jan 14 01:20:51.910132 systemd[1723]: Reached target basic.target - Basic System. Jan 14 01:20:51.910199 systemd[1723]: Reached target default.target - Main User Target. Jan 14 01:20:51.910256 systemd[1723]: Startup finished in 205ms. Jan 14 01:20:51.910376 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 01:20:51.913021 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 01:20:51.947120 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:59392.service - OpenSSH per-connection server daemon (10.0.0.1:59392). 
Jan 14 01:20:52.036686 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 59392 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:20:52.038947 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:20:52.046403 systemd-logind[1587]: New session 3 of user core. Jan 14 01:20:52.061813 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 01:20:52.079894 sshd[1741]: Connection closed by 10.0.0.1 port 59392 Jan 14 01:20:52.080506 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Jan 14 01:20:52.097841 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:59392.service: Deactivated successfully. Jan 14 01:20:52.100371 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 01:20:52.101945 systemd-logind[1587]: Session 3 logged out. Waiting for processes to exit. Jan 14 01:20:52.105347 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:59396.service - OpenSSH per-connection server daemon (10.0.0.1:59396). Jan 14 01:20:52.106647 systemd-logind[1587]: Removed session 3. Jan 14 01:20:52.187983 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 59396 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:20:52.190732 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:20:52.199211 systemd-logind[1587]: New session 4 of user core. Jan 14 01:20:52.209842 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 01:20:52.223753 sshd[1752]: Connection closed by 10.0.0.1 port 59396 Jan 14 01:20:52.224395 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Jan 14 01:20:52.234336 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:59396.service: Deactivated successfully. Jan 14 01:20:52.236893 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 01:20:52.238237 systemd-logind[1587]: Session 4 logged out. Waiting for processes to exit. 
Jan 14 01:20:52.241883 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:59400.service - OpenSSH per-connection server daemon (10.0.0.1:59400). Jan 14 01:20:52.243142 systemd-logind[1587]: Removed session 4. Jan 14 01:20:52.314427 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 59400 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:20:52.316900 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:20:52.324061 systemd-logind[1587]: New session 5 of user core. Jan 14 01:20:52.337838 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 01:20:52.357812 sshd[1762]: Connection closed by 10.0.0.1 port 59400 Jan 14 01:20:52.358052 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Jan 14 01:20:52.375592 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:59400.service: Deactivated successfully. Jan 14 01:20:52.377442 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 01:20:52.378887 systemd-logind[1587]: Session 5 logged out. Waiting for processes to exit. Jan 14 01:20:52.381922 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:52664.service - OpenSSH per-connection server daemon (10.0.0.1:52664). Jan 14 01:20:52.382474 systemd-logind[1587]: Removed session 5. Jan 14 01:20:52.458056 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 52664 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:20:52.459858 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:20:52.465878 systemd-logind[1587]: New session 6 of user core. Jan 14 01:20:52.479872 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 14 01:20:52.512262 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 01:20:52.512768 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:20:52.528351 sudo[1774]: pam_unix(sudo:session): session closed for user root Jan 14 01:20:52.531136 sshd[1773]: Connection closed by 10.0.0.1 port 52664 Jan 14 01:20:52.532012 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Jan 14 01:20:52.543404 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:52664.service: Deactivated successfully. Jan 14 01:20:52.545800 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 01:20:52.547278 systemd-logind[1587]: Session 6 logged out. Waiting for processes to exit. Jan 14 01:20:52.550786 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:52678.service - OpenSSH per-connection server daemon (10.0.0.1:52678). Jan 14 01:20:52.551943 systemd-logind[1587]: Removed session 6. Jan 14 01:20:52.639191 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 52678 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:20:52.641634 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:20:52.648056 systemd-logind[1587]: New session 7 of user core. Jan 14 01:20:52.657745 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 14 01:20:52.683254 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 01:20:52.683863 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:20:52.689870 sudo[1787]: pam_unix(sudo:session): session closed for user root Jan 14 01:20:52.701174 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 01:20:52.701741 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:20:52.713373 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:20:52.781000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:20:52.783781 augenrules[1811]: No rules Jan 14 01:20:52.785824 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:20:52.786262 kernel: kauditd_printk_skb: 181 callbacks suppressed Jan 14 01:20:52.786311 kernel: audit: type=1305 audit(1768353652.781:226): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:20:52.786319 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 14 01:20:52.787972 sudo[1786]: pam_unix(sudo:session): session closed for user root Jan 14 01:20:52.790198 sshd[1785]: Connection closed by 10.0.0.1 port 52678 Jan 14 01:20:52.790858 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Jan 14 01:20:52.781000 audit[1811]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd6a3459a0 a2=420 a3=0 items=0 ppid=1792 pid=1811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:52.808981 kernel: audit: type=1300 audit(1768353652.781:226): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd6a3459a0 a2=420 a3=0 items=0 ppid=1792 pid=1811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:52.809124 kernel: audit: type=1327 audit(1768353652.781:226): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:20:52.781000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:20:52.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:52.824126 kernel: audit: type=1130 audit(1768353652.785:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:52.824249 kernel: audit: type=1131 audit(1768353652.785:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:20:52.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:52.786000 audit[1786]: USER_END pid=1786 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:52.842996 kernel: audit: type=1106 audit(1768353652.786:229): pid=1786 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:52.843068 kernel: audit: type=1104 audit(1768353652.786:230): pid=1786 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:52.786000 audit[1786]: CRED_DISP pid=1786 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:52.847827 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:52678.service: Deactivated successfully. Jan 14 01:20:52.850201 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 01:20:52.791000 audit[1781]: USER_END pid=1781 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:20:52.851713 systemd-logind[1587]: Session 7 logged out. Waiting for processes to exit. 
Jan 14 01:20:52.854768 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:52694.service - OpenSSH per-connection server daemon (10.0.0.1:52694). Jan 14 01:20:52.855747 systemd-logind[1587]: Removed session 7. Jan 14 01:20:52.864804 kernel: audit: type=1106 audit(1768353652.791:231): pid=1781 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:20:52.864864 kernel: audit: type=1104 audit(1768353652.791:232): pid=1781 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:20:52.864917 kernel: audit: type=1131 audit(1768353652.847:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.134:22-10.0.0.1:52678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:52.791000 audit[1781]: CRED_DISP pid=1781 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:20:52.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.134:22-10.0.0.1:52678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:52.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.134:22-10.0.0.1:52694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:20:52.949000 audit[1820]: USER_ACCT pid=1820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:20:52.951882 sshd[1820]: Accepted publickey for core from 10.0.0.1 port 52694 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:20:52.951000 audit[1820]: CRED_ACQ pid=1820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:20:52.951000 audit[1820]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd431b68b0 a2=3 a3=0 items=0 ppid=1 pid=1820 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:52.951000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:20:52.953909 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:20:52.960477 systemd-logind[1587]: New session 8 of user core. Jan 14 01:20:52.975860 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 14 01:20:52.979000 audit[1820]: USER_START pid=1820 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:20:52.982000 audit[1824]: CRED_ACQ pid=1824 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:20:52.999000 audit[1825]: USER_ACCT pid=1825 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:53.000030 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 01:20:52.999000 audit[1825]: CRED_REFR pid=1825 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:53.000678 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:20:53.000000 audit[1825]: USER_START pid=1825 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:53.462801 systemd[1]: Starting docker.service - Docker Application Container Engine... 
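The `audit(1768353652.791:231)`-style field in the audit records above packs a Unix epoch timestamp (seconds.milliseconds) and a per-boot event serial number; converting the epoch portion recovers the same wall-clock time shown in the journal prefix. A minimal parsing sketch, using a value taken from the records above:

```python
from datetime import datetime, timezone

def parse_audit_id(field: str) -> tuple[datetime, int]:
    """Split an 'audit(EPOCH.MS:SERIAL)' audit field into a UTC datetime and serial."""
    inner = field[len("audit("):-1]      # strip the 'audit(' prefix and ')' suffix
    stamp, serial = inner.split(":")
    return datetime.fromtimestamp(float(stamp), tz=timezone.utc), int(serial)

when, serial = parse_audit_id("audit(1768353652.791:231)")
print(when.strftime("%Y-%m-%d %H:%M:%S"), serial)   # prints: 2026-01-14 01:20:52 231
```

Note the serial (here `:231`) is the same event identifier the kernel `audit: type=1106 audit(1768353652.791:231)` lines carry, which is how the journal-forwarded copies and the raw records can be correlated.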
Jan 14 01:20:53.490126 (dockerd)[1846]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 01:20:53.803725 dockerd[1846]: time="2026-01-14T01:20:53.803623710Z" level=info msg="Starting up" Jan 14 01:20:53.805196 dockerd[1846]: time="2026-01-14T01:20:53.805127275Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 01:20:53.825196 dockerd[1846]: time="2026-01-14T01:20:53.825058468Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 01:20:53.893570 dockerd[1846]: time="2026-01-14T01:20:53.893357805Z" level=info msg="Loading containers: start." Jan 14 01:20:53.908771 kernel: Initializing XFRM netlink socket Jan 14 01:20:54.018000 audit[1900]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.018000 audit[1900]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffba6c3ab0 a2=0 a3=0 items=0 ppid=1846 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.018000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:20:54.024000 audit[1902]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1902 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.024000 audit[1902]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc4feea180 a2=0 a3=0 items=0 ppid=1846 pid=1902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:20:54.024000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:20:54.028000 audit[1904]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1904 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.028000 audit[1904]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2c9003d0 a2=0 a3=0 items=0 ppid=1846 pid=1904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.028000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:20:54.033000 audit[1906]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1906 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.033000 audit[1906]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeced6ea50 a2=0 a3=0 items=0 ppid=1846 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.033000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:20:54.038000 audit[1908]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1908 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.038000 audit[1908]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb57ed5a0 a2=0 a3=0 items=0 ppid=1846 pid=1908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.038000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:20:54.043000 audit[1910]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.043000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdca8f28a0 a2=0 a3=0 items=0 ppid=1846 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.043000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:20:54.047000 audit[1912]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.047000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd29540a20 a2=0 a3=0 items=0 ppid=1846 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.047000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:20:54.053000 audit[1914]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.053000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdc8039430 a2=0 a3=0 items=0 ppid=1846 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.053000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:20:54.094000 audit[1917]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.094000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffed4e9baf0 a2=0 a3=0 items=0 ppid=1846 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.094000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 01:20:54.099000 audit[1919]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.099000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc9e582570 a2=0 a3=0 items=0 ppid=1846 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.099000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:20:54.103000 audit[1921]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.103000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffff09bfc0 a2=0 a3=0 items=0 ppid=1846 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.103000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:20:54.108000 audit[1923]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.108000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffcdce315e0 a2=0 a3=0 items=0 ppid=1846 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.108000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:20:54.112000 audit[1925]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.112000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe161ef5d0 a2=0 a3=0 items=0 ppid=1846 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.112000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:20:54.187000 audit[1955]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.187000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc4b547bb0 a2=0 a3=0 items=0 ppid=1846 pid=1955 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:20:54.192000 audit[1957]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.192000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffc6676060 a2=0 a3=0 items=0 ppid=1846 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.192000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:20:54.196000 audit[1959]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.196000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe167abfe0 a2=0 a3=0 items=0 ppid=1846 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.196000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:20:54.201000 audit[1961]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.201000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff593bdd00 a2=0 a3=0 items=0 ppid=1846 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.201000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:20:54.207000 audit[1963]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.207000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeba2486a0 a2=0 a3=0 items=0 ppid=1846 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.207000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:20:54.211000 audit[1965]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.211000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffde1d877f0 a2=0 a3=0 items=0 ppid=1846 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.211000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:20:54.216000 audit[1967]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.216000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff1f806660 a2=0 a3=0 items=0 ppid=1846 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.216000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:20:54.220000 audit[1969]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.220000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe09294620 a2=0 a3=0 items=0 ppid=1846 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.220000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:20:54.226000 audit[1971]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.226000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff53536490 a2=0 a3=0 items=0 ppid=1846 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.226000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 01:20:54.231000 audit[1973]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 
01:20:54.231000 audit[1973]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff95becac0 a2=0 a3=0 items=0 ppid=1846 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.231000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:20:54.234000 audit[1975]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.234000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff62c4c090 a2=0 a3=0 items=0 ppid=1846 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.234000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:20:54.238000 audit[1977]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.238000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff1cabd300 a2=0 a3=0 items=0 ppid=1846 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.238000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:20:54.243000 audit[1979]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1979 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.243000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcb174c150 a2=0 a3=0 items=0 ppid=1846 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:20:54.256000 audit[1984]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.256000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc453ec210 a2=0 a3=0 items=0 ppid=1846 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:20:54.261000 audit[1986]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.261000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdaa266840 a2=0 a3=0 items=0 ppid=1846 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.261000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:20:54.265000 audit[1988]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1988 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.265000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd3bda7fe0 a2=0 a3=0 items=0 ppid=1846 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.265000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:20:54.269000 audit[1990]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.269000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff63bca830 a2=0 a3=0 items=0 ppid=1846 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.269000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:20:54.274000 audit[1992]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.274000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffee6046a10 a2=0 a3=0 items=0 ppid=1846 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.274000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:20:54.278000 audit[1994]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1994 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:54.278000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffb32eda50 a2=0 a3=0 items=0 ppid=1846 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.278000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:20:54.303840 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 01:20:54.302000 audit[1998]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.302000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd86459fb0 a2=0 a3=0 items=0 ppid=1846 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.302000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 01:20:54.306708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
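The NETFILTER_CFG records above interleave `family=2` entries (emitted by `iptables`) with `family=10` entries (emitted by `ip6tables`); these are the Linux address-family constants AF_INET and AF_INET6. A small lookup sketch, hardcoding the Linux values rather than reading them from the `socket` module:

```python
# Linux address-family constants as they appear in audit NETFILTER_CFG records.
# (On Linux, socket.AF_INET == 2 and socket.AF_INET6 == 10.)
AUDIT_FAMILY = {2: "AF_INET (IPv4)", 10: "AF_INET6 (IPv6)"}

print(AUDIT_FAMILY[2], "/", AUDIT_FAMILY[10])
```

So the run of `family=2` chains followed by the matching `family=10` chains is Docker creating its DOCKER/DOCKER-FORWARD/DOCKER-ISOLATION rule set once for IPv4 and once for IPv6.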
Jan 14 01:20:54.308000 audit[2001]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.308000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffcd5f149f0 a2=0 a3=0 items=0 ppid=1846 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.308000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 01:20:54.333000 audit[2011]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.333000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff49495150 a2=0 a3=0 items=0 ppid=1846 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.333000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 01:20:54.358000 audit[2017]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.358000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffd4ceab00 a2=0 a3=0 items=0 ppid=1846 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.358000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 01:20:54.363000 audit[2019]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.363000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc92719650 a2=0 a3=0 items=0 ppid=1846 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.363000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 01:20:54.368000 audit[2021]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.368000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd1f43ded0 a2=0 a3=0 items=0 ppid=1846 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.368000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 01:20:54.374000 audit[2023]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.374000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe87388af0 a2=0 a3=0 items=0 ppid=1846 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.374000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:20:54.379000 audit[2025]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:54.379000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd833c2bb0 a2=0 a3=0 items=0 ppid=1846 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:54.379000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 01:20:54.381144 systemd-networkd[1516]: docker0: Link UP Jan 14 01:20:54.481675 dockerd[1846]: time="2026-01-14T01:20:54.481336328Z" level=info msg="Loading containers: done." Jan 14 01:20:54.505129 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1134914793-merged.mount: Deactivated successfully. 
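The `proctitle=` value in each audit PROCTITLE record is the process's argv, hex-encoded with NUL bytes separating the arguments. Decoding the first NETFILTER_CFG value from the records above recovers the exact `iptables` command Docker issued:

```python
def decode_proctitle(hex_value: str) -> list[str]:
    """Decode an audit PROCTITLE hex string into its argv list (NUL-separated)."""
    raw = bytes.fromhex(hex_value)
    return [arg.decode() for arg in raw.split(b"\x00")]

# Value taken from the first NETFILTER_CFG record above (pid 1900).
argv = decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
)
print(" ".join(argv))   # prints: /usr/bin/iptables --wait -t nat -N DOCKER
```

The same decoder applies to every PROCTITLE line here, including the non-iptables ones (e.g. the `sshd-session: core [priv]` title in the session-open records).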
Jan 14 01:20:54.516065 dockerd[1846]: time="2026-01-14T01:20:54.515964655Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 01:20:54.516217 dockerd[1846]: time="2026-01-14T01:20:54.516105890Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 01:20:54.516494 dockerd[1846]: time="2026-01-14T01:20:54.516348497Z" level=info msg="Initializing buildkit" Jan 14 01:20:54.527187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:20:54.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:54.544276 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:20:54.568905 dockerd[1846]: time="2026-01-14T01:20:54.568859670Z" level=info msg="Completed buildkit initialization" Jan 14 01:20:54.573577 dockerd[1846]: time="2026-01-14T01:20:54.573098500Z" level=info msg="Daemon has completed initialization" Jan 14 01:20:54.573577 dockerd[1846]: time="2026-01-14T01:20:54.573134098Z" level=info msg="API listen on /run/docker.sock" Jan 14 01:20:54.574384 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 01:20:54.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:20:54.600937 kubelet[2038]: E0114 01:20:54.600792 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:20:54.606424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:20:54.606729 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:20:54.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:20:54.607317 systemd[1]: kubelet.service: Consumed 240ms CPU time, 110.9M memory peak. Jan 14 01:20:55.480756 containerd[1611]: time="2026-01-14T01:20:55.480676375Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 14 01:20:56.272380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2764972711.mount: Deactivated successfully. 
Jan 14 01:20:57.389420 containerd[1611]: time="2026-01-14T01:20:57.389288459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:57.390641 containerd[1611]: time="2026-01-14T01:20:57.390487379Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Jan 14 01:20:57.392034 containerd[1611]: time="2026-01-14T01:20:57.391961579Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:57.395911 containerd[1611]: time="2026-01-14T01:20:57.395821285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:57.396931 containerd[1611]: time="2026-01-14T01:20:57.396858638Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.916099341s" Jan 14 01:20:57.396931 containerd[1611]: time="2026-01-14T01:20:57.396894298Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 14 01:20:57.398153 containerd[1611]: time="2026-01-14T01:20:57.398057940Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 14 01:20:58.880648 containerd[1611]: time="2026-01-14T01:20:58.880361761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:58.881690 containerd[1611]: time="2026-01-14T01:20:58.881627823Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 14 01:20:58.883258 containerd[1611]: time="2026-01-14T01:20:58.883175081Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:58.886626 containerd[1611]: time="2026-01-14T01:20:58.886593443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:58.887280 containerd[1611]: time="2026-01-14T01:20:58.887181262Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.489053588s" Jan 14 01:20:58.887315 containerd[1611]: time="2026-01-14T01:20:58.887290901Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 14 01:20:58.888758 containerd[1611]: time="2026-01-14T01:20:58.888305817Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 14 01:21:00.213437 containerd[1611]: time="2026-01-14T01:21:00.213329848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:00.214247 containerd[1611]: time="2026-01-14T01:21:00.214210097Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 14 01:21:00.215722 containerd[1611]: time="2026-01-14T01:21:00.215617243Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:00.218442 containerd[1611]: time="2026-01-14T01:21:00.218376088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:00.219242 containerd[1611]: time="2026-01-14T01:21:00.219221564Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.33088318s" Jan 14 01:21:00.219339 containerd[1611]: time="2026-01-14T01:21:00.219246035Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 14 01:21:00.220031 containerd[1611]: time="2026-01-14T01:21:00.219968187Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 14 01:21:01.326032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2219908524.mount: Deactivated successfully. 
Jan 14 01:21:01.819375 containerd[1611]: time="2026-01-14T01:21:01.819201687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:01.820624 containerd[1611]: time="2026-01-14T01:21:01.820580236Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=0" Jan 14 01:21:01.821832 containerd[1611]: time="2026-01-14T01:21:01.821771236Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:01.825314 containerd[1611]: time="2026-01-14T01:21:01.824810745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:01.825625 containerd[1611]: time="2026-01-14T01:21:01.825487820Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.605489378s" Jan 14 01:21:01.826108 containerd[1611]: time="2026-01-14T01:21:01.826001845Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 14 01:21:01.826855 containerd[1611]: time="2026-01-14T01:21:01.826787163Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 14 01:21:02.364769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168843795.mount: Deactivated successfully. 
Jan 14 01:21:03.312523 containerd[1611]: time="2026-01-14T01:21:03.312438685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:03.313267 containerd[1611]: time="2026-01-14T01:21:03.313196324Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20257574" Jan 14 01:21:03.314670 containerd[1611]: time="2026-01-14T01:21:03.314608759Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:03.319074 containerd[1611]: time="2026-01-14T01:21:03.318987551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:03.319639 containerd[1611]: time="2026-01-14T01:21:03.319575472Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.492556978s" Jan 14 01:21:03.319639 containerd[1611]: time="2026-01-14T01:21:03.319631999Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 14 01:21:03.320569 containerd[1611]: time="2026-01-14T01:21:03.320409440Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 01:21:03.689291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount361187742.mount: Deactivated successfully. 
Jan 14 01:21:03.696328 containerd[1611]: time="2026-01-14T01:21:03.696195419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:21:03.697958 containerd[1611]: time="2026-01-14T01:21:03.697876942Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:21:03.699386 containerd[1611]: time="2026-01-14T01:21:03.699300952Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:21:03.702339 containerd[1611]: time="2026-01-14T01:21:03.702251236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:21:03.702869 containerd[1611]: time="2026-01-14T01:21:03.702771683Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 382.316947ms" Jan 14 01:21:03.702869 containerd[1611]: time="2026-01-14T01:21:03.702836722Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 01:21:03.703424 containerd[1611]: time="2026-01-14T01:21:03.703355524Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 14 01:21:04.128248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822420298.mount: Deactivated 
successfully. Jan 14 01:21:04.804660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 01:21:04.809817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:21:05.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:05.020793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:21:05.026710 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 14 01:21:05.026776 kernel: audit: type=1130 audit(1768353665.019:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:05.053977 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:21:05.112119 kubelet[2269]: E0114 01:21:05.111918 2269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:21:05.115845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:21:05.116069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:21:05.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:21:05.117026 systemd[1]: kubelet.service: Consumed 222ms CPU time, 109.3M memory peak. 
Jan 14 01:21:05.130610 kernel: audit: type=1131 audit(1768353665.116:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:21:06.391489 containerd[1611]: time="2026-01-14T01:21:06.391296291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:06.392153 containerd[1611]: time="2026-01-14T01:21:06.392099955Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Jan 14 01:21:06.393643 containerd[1611]: time="2026-01-14T01:21:06.393582195Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:06.396456 containerd[1611]: time="2026-01-14T01:21:06.396390253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:06.397387 containerd[1611]: time="2026-01-14T01:21:06.397303489Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.693900295s" Jan 14 01:21:06.397387 containerd[1611]: time="2026-01-14T01:21:06.397376684Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 14 01:21:09.391390 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:21:09.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:09.391948 systemd[1]: kubelet.service: Consumed 222ms CPU time, 109.3M memory peak. Jan 14 01:21:09.395620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:21:09.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:09.409816 kernel: audit: type=1130 audit(1768353669.390:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:09.409887 kernel: audit: type=1131 audit(1768353669.390:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:09.426848 systemd[1]: Reload requested from client PID 2315 ('systemctl') (unit session-8.scope)... Jan 14 01:21:09.426894 systemd[1]: Reloading... Jan 14 01:21:09.528611 zram_generator::config[2367]: No configuration found. Jan 14 01:21:09.753939 systemd[1]: Reloading finished in 326 ms. 
Jan 14 01:21:09.786000 audit: BPF prog-id=63 op=LOAD Jan 14 01:21:09.786000 audit: BPF prog-id=60 op=UNLOAD Jan 14 01:21:09.794602 kernel: audit: type=1334 audit(1768353669.786:290): prog-id=63 op=LOAD Jan 14 01:21:09.794640 kernel: audit: type=1334 audit(1768353669.786:291): prog-id=60 op=UNLOAD Jan 14 01:21:09.794675 kernel: audit: type=1334 audit(1768353669.787:292): prog-id=64 op=LOAD Jan 14 01:21:09.787000 audit: BPF prog-id=64 op=LOAD Jan 14 01:21:09.787000 audit: BPF prog-id=65 op=LOAD Jan 14 01:21:09.800776 kernel: audit: type=1334 audit(1768353669.787:293): prog-id=65 op=LOAD Jan 14 01:21:09.800816 kernel: audit: type=1334 audit(1768353669.787:294): prog-id=61 op=UNLOAD Jan 14 01:21:09.787000 audit: BPF prog-id=61 op=UNLOAD Jan 14 01:21:09.787000 audit: BPF prog-id=62 op=UNLOAD Jan 14 01:21:09.807654 kernel: audit: type=1334 audit(1768353669.787:295): prog-id=62 op=UNLOAD Jan 14 01:21:09.789000 audit: BPF prog-id=66 op=LOAD Jan 14 01:21:09.789000 audit: BPF prog-id=59 op=UNLOAD Jan 14 01:21:09.790000 audit: BPF prog-id=67 op=LOAD Jan 14 01:21:09.790000 audit: BPF prog-id=45 op=UNLOAD Jan 14 01:21:09.790000 audit: BPF prog-id=68 op=LOAD Jan 14 01:21:09.790000 audit: BPF prog-id=69 op=LOAD Jan 14 01:21:09.790000 audit: BPF prog-id=46 op=UNLOAD Jan 14 01:21:09.790000 audit: BPF prog-id=47 op=UNLOAD Jan 14 01:21:09.791000 audit: BPF prog-id=70 op=LOAD Jan 14 01:21:09.813000 audit: BPF prog-id=51 op=UNLOAD Jan 14 01:21:09.813000 audit: BPF prog-id=71 op=LOAD Jan 14 01:21:09.813000 audit: BPF prog-id=72 op=LOAD Jan 14 01:21:09.814000 audit: BPF prog-id=43 op=UNLOAD Jan 14 01:21:09.814000 audit: BPF prog-id=44 op=UNLOAD Jan 14 01:21:09.815000 audit: BPF prog-id=73 op=LOAD Jan 14 01:21:09.815000 audit: BPF prog-id=55 op=UNLOAD Jan 14 01:21:09.815000 audit: BPF prog-id=74 op=LOAD Jan 14 01:21:09.815000 audit: BPF prog-id=75 op=LOAD Jan 14 01:21:09.815000 audit: BPF prog-id=56 op=UNLOAD Jan 14 01:21:09.815000 audit: BPF prog-id=57 op=UNLOAD Jan 14 01:21:09.816000 
audit: BPF prog-id=76 op=LOAD Jan 14 01:21:09.816000 audit: BPF prog-id=48 op=UNLOAD Jan 14 01:21:09.817000 audit: BPF prog-id=77 op=LOAD Jan 14 01:21:09.817000 audit: BPF prog-id=78 op=LOAD Jan 14 01:21:09.817000 audit: BPF prog-id=49 op=UNLOAD Jan 14 01:21:09.817000 audit: BPF prog-id=50 op=UNLOAD Jan 14 01:21:09.818000 audit: BPF prog-id=79 op=LOAD Jan 14 01:21:09.818000 audit: BPF prog-id=58 op=UNLOAD Jan 14 01:21:09.818000 audit: BPF prog-id=80 op=LOAD Jan 14 01:21:09.819000 audit: BPF prog-id=52 op=UNLOAD Jan 14 01:21:09.819000 audit: BPF prog-id=81 op=LOAD Jan 14 01:21:09.819000 audit: BPF prog-id=82 op=LOAD Jan 14 01:21:09.819000 audit: BPF prog-id=53 op=UNLOAD Jan 14 01:21:09.819000 audit: BPF prog-id=54 op=UNLOAD Jan 14 01:21:09.843203 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 01:21:09.843341 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 01:21:09.843821 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:21:09.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:21:09.844017 systemd[1]: kubelet.service: Consumed 148ms CPU time, 98.7M memory peak. Jan 14 01:21:09.846728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:21:10.070340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:21:10.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:21:10.073576 kernel: kauditd_printk_skb: 35 callbacks suppressed Jan 14 01:21:10.073675 kernel: audit: type=1130 audit(1768353670.070:331): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:10.091026 (kubelet)[2409]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:21:10.142073 kubelet[2409]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:21:10.142073 kubelet[2409]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:21:10.142073 kubelet[2409]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 01:21:10.142073 kubelet[2409]: I0114 01:21:10.141839 2409 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:21:10.590450 kubelet[2409]: I0114 01:21:10.590359 2409 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 01:21:10.590450 kubelet[2409]: I0114 01:21:10.590428 2409 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:21:10.590804 kubelet[2409]: I0114 01:21:10.590742 2409 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:21:10.621160 kubelet[2409]: I0114 01:21:10.621097 2409 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:21:10.621659 kubelet[2409]: E0114 01:21:10.621484 2409 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 01:21:10.630818 kubelet[2409]: I0114 01:21:10.630694 2409 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:21:10.638658 kubelet[2409]: I0114 01:21:10.638479 2409 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 01:21:10.638899 kubelet[2409]: I0114 01:21:10.638789 2409 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:21:10.638952 kubelet[2409]: I0114 01:21:10.638808 2409 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:21:10.638952 kubelet[2409]: I0114 01:21:10.638941 2409 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:21:10.638952 
kubelet[2409]: I0114 01:21:10.638950 2409 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 01:21:10.639227 kubelet[2409]: I0114 01:21:10.639122 2409 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:21:10.642077 kubelet[2409]: I0114 01:21:10.641978 2409 kubelet.go:480] "Attempting to sync node with API server" Jan 14 01:21:10.642077 kubelet[2409]: I0114 01:21:10.642070 2409 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:21:10.642150 kubelet[2409]: I0114 01:21:10.642097 2409 kubelet.go:386] "Adding apiserver pod source" Jan 14 01:21:10.642150 kubelet[2409]: I0114 01:21:10.642111 2409 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:21:10.646584 kubelet[2409]: E0114 01:21:10.645376 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 01:21:10.646584 kubelet[2409]: E0114 01:21:10.645376 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:21:10.647929 kubelet[2409]: I0114 01:21:10.647868 2409 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:21:10.648749 kubelet[2409]: I0114 01:21:10.648597 2409 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:21:10.649687 kubelet[2409]: W0114 
01:21:10.649669 2409 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 01:21:10.653993 kubelet[2409]: I0114 01:21:10.653925 2409 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:21:10.654160 kubelet[2409]: I0114 01:21:10.654119 2409 server.go:1289] "Started kubelet" Jan 14 01:21:10.656159 kubelet[2409]: I0114 01:21:10.655403 2409 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:21:10.656159 kubelet[2409]: I0114 01:21:10.655981 2409 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:21:10.657003 kubelet[2409]: I0114 01:21:10.656935 2409 server.go:317] "Adding debug handlers to kubelet server" Jan 14 01:21:10.658450 kubelet[2409]: I0114 01:21:10.658400 2409 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:21:10.661150 kubelet[2409]: E0114 01:21:10.659344 2409 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188a744b5896e2d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 01:21:10.653993681 +0000 UTC m=+0.557091655,LastTimestamp:2026-01-14 01:21:10.653993681 +0000 UTC m=+0.557091655,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 01:21:10.662617 kubelet[2409]: E0114 01:21:10.662433 2409 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jan 14 01:21:10.662617 kubelet[2409]: I0114 01:21:10.662596 2409 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:21:10.662971 kubelet[2409]: I0114 01:21:10.662839 2409 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:21:10.662971 kubelet[2409]: I0114 01:21:10.662947 2409 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:21:10.663302 kubelet[2409]: I0114 01:21:10.663094 2409 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:21:10.663715 kubelet[2409]: E0114 01:21:10.663664 2409 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:21:10.663821 kubelet[2409]: E0114 01:21:10.663786 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:21:10.664168 kubelet[2409]: I0114 01:21:10.664155 2409 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:21:10.664348 kubelet[2409]: I0114 01:21:10.664257 2409 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:21:10.665270 kubelet[2409]: E0114 01:21:10.665116 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms" Jan 14 01:21:10.666571 kubelet[2409]: I0114 01:21:10.665986 2409 
factory.go:223] Registration of the containerd container factory successfully Jan 14 01:21:10.666571 kubelet[2409]: I0114 01:21:10.666000 2409 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:21:10.669000 audit[2426]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:10.669000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd625b2bb0 a2=0 a3=0 items=0 ppid=2409 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:10.689807 kubelet[2409]: I0114 01:21:10.683096 2409 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:21:10.689807 kubelet[2409]: I0114 01:21:10.683109 2409 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:21:10.689807 kubelet[2409]: I0114 01:21:10.683124 2409 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:21:10.699191 kernel: audit: type=1325 audit(1768353670.669:332): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:10.699301 kernel: audit: type=1300 audit(1768353670.669:332): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd625b2bb0 a2=0 a3=0 items=0 ppid=2409 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:10.699342 kernel: audit: type=1327 audit(1768353670.669:332): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:21:10.669000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:21:10.673000 audit[2427]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:10.718484 kernel: audit: type=1325 audit(1768353670.673:333): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:10.718641 kernel: audit: type=1300 audit(1768353670.673:333): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd99e96e00 a2=0 a3=0 items=0 ppid=2409 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:10.673000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd99e96e00 a2=0 a3=0 items=0 ppid=2409 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:10.673000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:21:10.739696 kernel: audit: type=1327 audit(1768353670.673:333): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:21:10.739765 kernel: audit: type=1325 audit(1768353670.678:334): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:10.678000 audit[2431]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:10.678000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd4bda3980 
a2=0 a3=0 items=0 ppid=2409 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:10.761669 kernel: audit: type=1300 audit(1768353670.678:334): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd4bda3980 a2=0 a3=0 items=0 ppid=2409 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:10.761728 kernel: audit: type=1327 audit(1768353670.678:334): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:21:10.678000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:21:10.763038 kubelet[2409]: E0114 01:21:10.762968 2409 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 01:21:10.684000 audit[2433]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:10.684000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff91eb4940 a2=0 a3=0 items=0 ppid=2409 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:10.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:21:10.863668 kubelet[2409]: E0114 01:21:10.863316 2409 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not 
found" Jan 14 01:21:10.866484 kubelet[2409]: E0114 01:21:10.866352 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" Jan 14 01:21:10.888306 kubelet[2409]: I0114 01:21:10.888056 2409 policy_none.go:49] "None policy: Start" Jan 14 01:21:10.888306 kubelet[2409]: I0114 01:21:10.888122 2409 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:21:10.888306 kubelet[2409]: I0114 01:21:10.888137 2409 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:21:10.896000 audit[2439]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:10.896000 audit[2439]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc440eae30 a2=0 a3=0 items=0 ppid=2409 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:10.896000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 01:21:10.898277 kubelet[2409]: I0114 01:21:10.898151 2409 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 01:21:10.900380 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 14 01:21:10.898000 audit[2442]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:21:10.898000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd12609b60 a2=0 a3=0 items=0 ppid=2409 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:21:10.898000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jan 14 01:21:10.899000 audit[2441]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:21:10.899000 audit[2441]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc37298030 a2=0 a3=0 items=0 ppid=2409 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:21:10.899000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 14 01:21:10.901824 kubelet[2409]: I0114 01:21:10.901464 2409 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 14 01:21:10.901824 kubelet[2409]: I0114 01:21:10.901488 2409 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 14 01:21:10.901824 kubelet[2409]: I0114 01:21:10.901624 2409 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 14 01:21:10.901824 kubelet[2409]: I0114 01:21:10.901635 2409 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 14 01:21:10.901824 kubelet[2409]: E0114 01:21:10.901683 2409 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 14 01:21:10.903884 kubelet[2409]: E0114 01:21:10.903793 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 14 01:21:10.902000 audit[2443]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:21:10.902000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbae30910 a2=0 a3=0 items=0 ppid=2409 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:21:10.902000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jan 14 01:21:10.902000 audit[2444]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:21:10.902000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd56ecbc40 a2=0 a3=0 items=0 ppid=2409 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:21:10.902000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jan 14 01:21:10.905000 audit[2448]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:21:10.905000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffffd10fae0 a2=0 a3=0 items=0 ppid=2409 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:21:10.905000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jan 14 01:21:10.905000 audit[2447]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:21:10.905000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2d2ede00 a2=0 a3=0 items=0 ppid=2409 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:21:10.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jan 14 01:21:10.908000 audit[2449]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:21:10.908000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeba254280 a2=0 a3=0 items=0 ppid=2409 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:21:10.908000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jan 14 01:21:10.914584 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 14 01:21:10.919835 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 14 01:21:10.931661 kubelet[2409]: E0114 01:21:10.931034 2409 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 14 01:21:10.931661 kubelet[2409]: I0114 01:21:10.931301 2409 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 14 01:21:10.931661 kubelet[2409]: I0114 01:21:10.931318 2409 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 14 01:21:10.931661 kubelet[2409]: I0114 01:21:10.931592 2409 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 14 01:21:10.933608 kubelet[2409]: E0114 01:21:10.933494 2409 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 14 01:21:10.933674 kubelet[2409]: E0114 01:21:10.933616 2409 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 14 01:21:11.019833 systemd[1]: Created slice kubepods-burstable-pod9807b6bdc071373d0a5826ec3470e31f.slice - libcontainer container kubepods-burstable-pod9807b6bdc071373d0a5826ec3470e31f.slice.
Jan 14 01:21:11.033142 kubelet[2409]: I0114 01:21:11.033081 2409 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:21:11.033447 kubelet[2409]: E0114 01:21:11.033406 2409 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jan 14 01:21:11.054024 kubelet[2409]: E0114 01:21:11.053903 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:21:11.058621 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 14 01:21:11.062111 kubelet[2409]: E0114 01:21:11.062056 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:21:11.064939 kubelet[2409]: I0114 01:21:11.064758 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:21:11.064939 kubelet[2409]: I0114 01:21:11.064830 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:21:11.064939 kubelet[2409]: I0114 01:21:11.064863 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9807b6bdc071373d0a5826ec3470e31f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9807b6bdc071373d0a5826ec3470e31f\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:21:11.064939 kubelet[2409]: I0114 01:21:11.064889 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9807b6bdc071373d0a5826ec3470e31f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9807b6bdc071373d0a5826ec3470e31f\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:21:11.064939 kubelet[2409]: I0114 01:21:11.064913 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:21:11.065130 kubelet[2409]: I0114 01:21:11.064937 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:21:11.065130 kubelet[2409]: I0114 01:21:11.064961 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:21:11.065130 kubelet[2409]: I0114 01:21:11.064983 2409 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 14 01:21:11.065130 kubelet[2409]: I0114 01:21:11.065002 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9807b6bdc071373d0a5826ec3470e31f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9807b6bdc071373d0a5826ec3470e31f\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:21:11.065939 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 14 01:21:11.068905 kubelet[2409]: E0114 01:21:11.068807 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:21:11.236354 kubelet[2409]: I0114 01:21:11.236161 2409 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:21:11.236903 kubelet[2409]: E0114 01:21:11.236819 2409 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jan 14 01:21:11.268604 kubelet[2409]: E0114 01:21:11.268373 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" Jan 14 01:21:11.355329 kubelet[2409]: E0114 01:21:11.355143 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:11.356379 containerd[1611]: time="2026-01-14T01:21:11.356283675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9807b6bdc071373d0a5826ec3470e31f,Namespace:kube-system,Attempt:0,}" Jan 14 01:21:11.362882 kubelet[2409]: E0114 01:21:11.362847 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:11.363230 containerd[1611]: time="2026-01-14T01:21:11.363161087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 14 01:21:11.370612 kubelet[2409]: E0114 01:21:11.370084 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:11.372012 containerd[1611]: time="2026-01-14T01:21:11.371938739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 14 01:21:11.397497 containerd[1611]: time="2026-01-14T01:21:11.397417727Z" level=info msg="connecting to shim 4770c55945749f0b9c064fc71972b8fe2fddc764b1a80ed40d9a581d143d582a" address="unix:///run/containerd/s/5171fb703cf2050872cfdb4d7be0a7db4d504f2b0d6c86b40bc5783d08625d59" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:11.415685 containerd[1611]: time="2026-01-14T01:21:11.415631195Z" level=info msg="connecting to shim 3fcb805df0162498775791fbe5102b5a9867a4f8a774e3a83fdc66f0841dda8c" address="unix:///run/containerd/s/259feb8f0f3c56576a1dd2fba6048ff92f1130fbd7a037bdcd484dd4bf84a6b3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:11.423373 containerd[1611]: time="2026-01-14T01:21:11.423334776Z" level=info msg="connecting to shim 
a7937b2a5e8d4910c448b34b42dcc8e43fe32f1c83ccc6d9b00b2bb1e2e707e3" address="unix:///run/containerd/s/94e2c1195056bccaca99d1f4a4093066a3864e03498b9c04125a1e2242b7961d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:11.457058 systemd[1]: Started cri-containerd-4770c55945749f0b9c064fc71972b8fe2fddc764b1a80ed40d9a581d143d582a.scope - libcontainer container 4770c55945749f0b9c064fc71972b8fe2fddc764b1a80ed40d9a581d143d582a. Jan 14 01:21:11.462243 systemd[1]: Started cri-containerd-3fcb805df0162498775791fbe5102b5a9867a4f8a774e3a83fdc66f0841dda8c.scope - libcontainer container 3fcb805df0162498775791fbe5102b5a9867a4f8a774e3a83fdc66f0841dda8c. Jan 14 01:21:11.483762 systemd[1]: Started cri-containerd-a7937b2a5e8d4910c448b34b42dcc8e43fe32f1c83ccc6d9b00b2bb1e2e707e3.scope - libcontainer container a7937b2a5e8d4910c448b34b42dcc8e43fe32f1c83ccc6d9b00b2bb1e2e707e3. Jan 14 01:21:11.486000 audit: BPF prog-id=83 op=LOAD Jan 14 01:21:11.487000 audit: BPF prog-id=84 op=LOAD Jan 14 01:21:11.487000 audit[2486]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2459 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373063353539343537343966306239633036346663373139373262 Jan 14 01:21:11.487000 audit: BPF prog-id=84 op=UNLOAD Jan 14 01:21:11.487000 audit[2486]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2459 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.487000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373063353539343537343966306239633036346663373139373262 Jan 14 01:21:11.487000 audit: BPF prog-id=85 op=LOAD Jan 14 01:21:11.487000 audit[2486]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2459 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373063353539343537343966306239633036346663373139373262 Jan 14 01:21:11.487000 audit: BPF prog-id=86 op=LOAD Jan 14 01:21:11.487000 audit[2486]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2459 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373063353539343537343966306239633036346663373139373262 Jan 14 01:21:11.487000 audit: BPF prog-id=86 op=UNLOAD Jan 14 01:21:11.487000 audit[2486]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2459 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:21:11.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373063353539343537343966306239633036346663373139373262 Jan 14 01:21:11.488000 audit: BPF prog-id=85 op=UNLOAD Jan 14 01:21:11.488000 audit[2486]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2459 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373063353539343537343966306239633036346663373139373262 Jan 14 01:21:11.488000 audit: BPF prog-id=87 op=LOAD Jan 14 01:21:11.488000 audit[2486]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2459 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437373063353539343537343966306239633036346663373139373262 Jan 14 01:21:11.490000 audit: BPF prog-id=88 op=LOAD Jan 14 01:21:11.491000 audit: BPF prog-id=89 op=LOAD Jan 14 01:21:11.491000 audit[2509]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2478 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366636238303564663031363234393837373537393166626535313032 Jan 14 01:21:11.492000 audit: BPF prog-id=89 op=UNLOAD Jan 14 01:21:11.492000 audit[2509]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366636238303564663031363234393837373537393166626535313032 Jan 14 01:21:11.492000 audit: BPF prog-id=90 op=LOAD Jan 14 01:21:11.492000 audit[2509]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2478 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366636238303564663031363234393837373537393166626535313032 Jan 14 01:21:11.492000 audit: BPF prog-id=91 op=LOAD Jan 14 01:21:11.492000 audit[2509]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2478 pid=2509 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366636238303564663031363234393837373537393166626535313032 Jan 14 01:21:11.492000 audit: BPF prog-id=91 op=UNLOAD Jan 14 01:21:11.492000 audit[2509]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366636238303564663031363234393837373537393166626535313032 Jan 14 01:21:11.492000 audit: BPF prog-id=90 op=UNLOAD Jan 14 01:21:11.492000 audit[2509]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366636238303564663031363234393837373537393166626535313032 Jan 14 01:21:11.492000 audit: BPF prog-id=92 op=LOAD Jan 14 01:21:11.492000 audit[2509]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2478 
pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366636238303564663031363234393837373537393166626535313032 Jan 14 01:21:11.503000 audit: BPF prog-id=93 op=LOAD Jan 14 01:21:11.504000 audit: BPF prog-id=94 op=LOAD Jan 14 01:21:11.504000 audit[2525]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2487 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137393337623261356538643439313063343438623334623432646363 Jan 14 01:21:11.504000 audit: BPF prog-id=94 op=UNLOAD Jan 14 01:21:11.504000 audit[2525]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137393337623261356538643439313063343438623334623432646363 Jan 14 01:21:11.504000 audit: BPF prog-id=95 op=LOAD Jan 14 01:21:11.504000 audit[2525]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2487 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137393337623261356538643439313063343438623334623432646363 Jan 14 01:21:11.504000 audit: BPF prog-id=96 op=LOAD Jan 14 01:21:11.504000 audit[2525]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2487 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137393337623261356538643439313063343438623334623432646363 Jan 14 01:21:11.505000 audit: BPF prog-id=96 op=UNLOAD Jan 14 01:21:11.505000 audit[2525]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137393337623261356538643439313063343438623334623432646363 Jan 14 01:21:11.505000 audit: BPF prog-id=95 op=UNLOAD 
Jan 14 01:21:11.505000 audit[2525]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137393337623261356538643439313063343438623334623432646363 Jan 14 01:21:11.505000 audit: BPF prog-id=97 op=LOAD Jan 14 01:21:11.505000 audit[2525]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2487 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137393337623261356538643439313063343438623334623432646363 Jan 14 01:21:11.542698 containerd[1611]: time="2026-01-14T01:21:11.542483158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9807b6bdc071373d0a5826ec3470e31f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4770c55945749f0b9c064fc71972b8fe2fddc764b1a80ed40d9a581d143d582a\"" Jan 14 01:21:11.546067 kubelet[2409]: E0114 01:21:11.545973 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:11.554237 containerd[1611]: time="2026-01-14T01:21:11.554143694Z" level=info msg="CreateContainer within sandbox 
\"4770c55945749f0b9c064fc71972b8fe2fddc764b1a80ed40d9a581d143d582a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 01:21:11.555706 containerd[1611]: time="2026-01-14T01:21:11.555596425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fcb805df0162498775791fbe5102b5a9867a4f8a774e3a83fdc66f0841dda8c\"" Jan 14 01:21:11.556438 kubelet[2409]: E0114 01:21:11.556416 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:11.565999 containerd[1611]: time="2026-01-14T01:21:11.565772849Z" level=info msg="CreateContainer within sandbox \"3fcb805df0162498775791fbe5102b5a9867a4f8a774e3a83fdc66f0841dda8c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 01:21:11.566576 containerd[1611]: time="2026-01-14T01:21:11.566476662Z" level=info msg="Container 6bb3f53950ae9ec0aa796e9bbf87325a07ec3acd9cbcb48f1a945f3cf8b8a945: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:11.566941 containerd[1611]: time="2026-01-14T01:21:11.566918036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7937b2a5e8d4910c448b34b42dcc8e43fe32f1c83ccc6d9b00b2bb1e2e707e3\"" Jan 14 01:21:11.567849 kubelet[2409]: E0114 01:21:11.567779 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:11.573420 containerd[1611]: time="2026-01-14T01:21:11.573248342Z" level=info msg="CreateContainer within sandbox \"a7937b2a5e8d4910c448b34b42dcc8e43fe32f1c83ccc6d9b00b2bb1e2e707e3\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 01:21:11.581135 containerd[1611]: time="2026-01-14T01:21:11.581002600Z" level=info msg="CreateContainer within sandbox \"4770c55945749f0b9c064fc71972b8fe2fddc764b1a80ed40d9a581d143d582a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6bb3f53950ae9ec0aa796e9bbf87325a07ec3acd9cbcb48f1a945f3cf8b8a945\"" Jan 14 01:21:11.581363 containerd[1611]: time="2026-01-14T01:21:11.581275271Z" level=info msg="Container ab4725c62e2912493fce17ca5b0b000e9f4f70695fd4498b870fde22e032902e: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:11.582190 containerd[1611]: time="2026-01-14T01:21:11.582167886Z" level=info msg="StartContainer for \"6bb3f53950ae9ec0aa796e9bbf87325a07ec3acd9cbcb48f1a945f3cf8b8a945\"" Jan 14 01:21:11.583295 containerd[1611]: time="2026-01-14T01:21:11.583222899Z" level=info msg="connecting to shim 6bb3f53950ae9ec0aa796e9bbf87325a07ec3acd9cbcb48f1a945f3cf8b8a945" address="unix:///run/containerd/s/5171fb703cf2050872cfdb4d7be0a7db4d504f2b0d6c86b40bc5783d08625d59" protocol=ttrpc version=3 Jan 14 01:21:11.594638 containerd[1611]: time="2026-01-14T01:21:11.594585455Z" level=info msg="CreateContainer within sandbox \"3fcb805df0162498775791fbe5102b5a9867a4f8a774e3a83fdc66f0841dda8c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ab4725c62e2912493fce17ca5b0b000e9f4f70695fd4498b870fde22e032902e\"" Jan 14 01:21:11.595205 containerd[1611]: time="2026-01-14T01:21:11.595120665Z" level=info msg="StartContainer for \"ab4725c62e2912493fce17ca5b0b000e9f4f70695fd4498b870fde22e032902e\"" Jan 14 01:21:11.595699 containerd[1611]: time="2026-01-14T01:21:11.595623677Z" level=info msg="Container e0b909388b4b4c144f7d656af3a95ca9d4900e27a9a0e541a92d7159b969a1b9: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:11.596498 containerd[1611]: time="2026-01-14T01:21:11.596433890Z" level=info msg="connecting to shim 
ab4725c62e2912493fce17ca5b0b000e9f4f70695fd4498b870fde22e032902e" address="unix:///run/containerd/s/259feb8f0f3c56576a1dd2fba6048ff92f1130fbd7a037bdcd484dd4bf84a6b3" protocol=ttrpc version=3 Jan 14 01:21:11.608100 containerd[1611]: time="2026-01-14T01:21:11.608039574Z" level=info msg="CreateContainer within sandbox \"a7937b2a5e8d4910c448b34b42dcc8e43fe32f1c83ccc6d9b00b2bb1e2e707e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e0b909388b4b4c144f7d656af3a95ca9d4900e27a9a0e541a92d7159b969a1b9\"" Jan 14 01:21:11.608635 containerd[1611]: time="2026-01-14T01:21:11.608497142Z" level=info msg="StartContainer for \"e0b909388b4b4c144f7d656af3a95ca9d4900e27a9a0e541a92d7159b969a1b9\"" Jan 14 01:21:11.609613 containerd[1611]: time="2026-01-14T01:21:11.609466551Z" level=info msg="connecting to shim e0b909388b4b4c144f7d656af3a95ca9d4900e27a9a0e541a92d7159b969a1b9" address="unix:///run/containerd/s/94e2c1195056bccaca99d1f4a4093066a3864e03498b9c04125a1e2242b7961d" protocol=ttrpc version=3 Jan 14 01:21:11.609861 systemd[1]: Started cri-containerd-6bb3f53950ae9ec0aa796e9bbf87325a07ec3acd9cbcb48f1a945f3cf8b8a945.scope - libcontainer container 6bb3f53950ae9ec0aa796e9bbf87325a07ec3acd9cbcb48f1a945f3cf8b8a945. Jan 14 01:21:11.626869 systemd[1]: Started cri-containerd-ab4725c62e2912493fce17ca5b0b000e9f4f70695fd4498b870fde22e032902e.scope - libcontainer container ab4725c62e2912493fce17ca5b0b000e9f4f70695fd4498b870fde22e032902e. Jan 14 01:21:11.640142 kubelet[2409]: I0114 01:21:11.639395 2409 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:21:11.640219 systemd[1]: Started cri-containerd-e0b909388b4b4c144f7d656af3a95ca9d4900e27a9a0e541a92d7159b969a1b9.scope - libcontainer container e0b909388b4b4c144f7d656af3a95ca9d4900e27a9a0e541a92d7159b969a1b9. 
Jan 14 01:21:11.640891 kubelet[2409]: E0114 01:21:11.640822 2409 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jan 14 01:21:11.643000 audit: BPF prog-id=98 op=LOAD Jan 14 01:21:11.643000 audit: BPF prog-id=99 op=LOAD Jan 14 01:21:11.643000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2459 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662623366353339353061653965633061613739366539626266383733 Jan 14 01:21:11.643000 audit: BPF prog-id=99 op=UNLOAD Jan 14 01:21:11.643000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2459 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662623366353339353061653965633061613739366539626266383733 Jan 14 01:21:11.644000 audit: BPF prog-id=100 op=LOAD Jan 14 01:21:11.644000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2459 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662623366353339353061653965633061613739366539626266383733 Jan 14 01:21:11.644000 audit: BPF prog-id=101 op=LOAD Jan 14 01:21:11.644000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2459 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662623366353339353061653965633061613739366539626266383733 Jan 14 01:21:11.644000 audit: BPF prog-id=101 op=UNLOAD Jan 14 01:21:11.644000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2459 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662623366353339353061653965633061613739366539626266383733 Jan 14 01:21:11.644000 audit: BPF prog-id=100 op=UNLOAD Jan 14 01:21:11.644000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2459 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662623366353339353061653965633061613739366539626266383733 Jan 14 01:21:11.644000 audit: BPF prog-id=102 op=LOAD Jan 14 01:21:11.644000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2459 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662623366353339353061653965633061613739366539626266383733 Jan 14 01:21:11.659000 audit: BPF prog-id=103 op=LOAD Jan 14 01:21:11.660000 audit: BPF prog-id=104 op=LOAD Jan 14 01:21:11.660000 audit[2602]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2478 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343732356336326532393132343933666365313763613562306230 Jan 14 01:21:11.660000 audit: BPF prog-id=104 op=UNLOAD Jan 14 01:21:11.660000 audit[2602]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2602 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343732356336326532393132343933666365313763613562306230 Jan 14 01:21:11.660000 audit: BPF prog-id=105 op=LOAD Jan 14 01:21:11.660000 audit[2602]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2478 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343732356336326532393132343933666365313763613562306230 Jan 14 01:21:11.660000 audit: BPF prog-id=106 op=LOAD Jan 14 01:21:11.660000 audit[2602]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2478 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343732356336326532393132343933666365313763613562306230 Jan 14 01:21:11.660000 audit: BPF prog-id=106 op=UNLOAD Jan 14 01:21:11.660000 audit[2602]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 
a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343732356336326532393132343933666365313763613562306230 Jan 14 01:21:11.660000 audit: BPF prog-id=105 op=UNLOAD Jan 14 01:21:11.660000 audit[2602]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343732356336326532393132343933666365313763613562306230 Jan 14 01:21:11.660000 audit: BPF prog-id=107 op=LOAD Jan 14 01:21:11.660000 audit[2602]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2478 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343732356336326532393132343933666365313763613562306230 Jan 14 01:21:11.662000 audit: BPF prog-id=108 op=LOAD Jan 14 01:21:11.664000 audit: BPF prog-id=109 op=LOAD Jan 14 
01:21:11.664000 audit[2614]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2487 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530623930393338386234623463313434663764363536616633613935 Jan 14 01:21:11.664000 audit: BPF prog-id=109 op=UNLOAD Jan 14 01:21:11.664000 audit[2614]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530623930393338386234623463313434663764363536616633613935 Jan 14 01:21:11.665000 audit: BPF prog-id=110 op=LOAD Jan 14 01:21:11.665000 audit[2614]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2487 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530623930393338386234623463313434663764363536616633613935 Jan 14 
01:21:11.665000 audit: BPF prog-id=111 op=LOAD Jan 14 01:21:11.665000 audit[2614]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2487 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530623930393338386234623463313434663764363536616633613935 Jan 14 01:21:11.666000 audit: BPF prog-id=111 op=UNLOAD Jan 14 01:21:11.666000 audit[2614]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530623930393338386234623463313434663764363536616633613935 Jan 14 01:21:11.666000 audit: BPF prog-id=110 op=UNLOAD Jan 14 01:21:11.666000 audit[2614]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.666000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530623930393338386234623463313434663764363536616633613935 Jan 14 01:21:11.667000 audit: BPF prog-id=112 op=LOAD Jan 14 01:21:11.667000 audit[2614]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2487 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:11.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530623930393338386234623463313434663764363536616633613935 Jan 14 01:21:11.697769 kubelet[2409]: E0114 01:21:11.697492 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:21:11.702755 containerd[1611]: time="2026-01-14T01:21:11.701232558Z" level=info msg="StartContainer for \"6bb3f53950ae9ec0aa796e9bbf87325a07ec3acd9cbcb48f1a945f3cf8b8a945\" returns successfully" Jan 14 01:21:11.717695 containerd[1611]: time="2026-01-14T01:21:11.717662380Z" level=info msg="StartContainer for \"ab4725c62e2912493fce17ca5b0b000e9f4f70695fd4498b870fde22e032902e\" returns successfully" Jan 14 01:21:11.745390 containerd[1611]: time="2026-01-14T01:21:11.744963217Z" level=info msg="StartContainer for \"e0b909388b4b4c144f7d656af3a95ca9d4900e27a9a0e541a92d7159b969a1b9\" returns successfully" Jan 14 01:21:11.915421 kubelet[2409]: E0114 
01:21:11.915349 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:21:11.917080 kubelet[2409]: E0114 01:21:11.917002 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:11.928660 kubelet[2409]: E0114 01:21:11.928600 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:21:11.928800 kubelet[2409]: E0114 01:21:11.928754 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:11.933106 kubelet[2409]: E0114 01:21:11.933051 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:21:11.933653 kubelet[2409]: E0114 01:21:11.933201 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:12.444349 kubelet[2409]: I0114 01:21:12.444246 2409 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:21:12.936395 kubelet[2409]: E0114 01:21:12.936306 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:21:12.938733 kubelet[2409]: E0114 01:21:12.938666 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:12.939818 kubelet[2409]: E0114 01:21:12.939733 2409 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:21:12.939994 kubelet[2409]: E0114 01:21:12.939904 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:13.121373 kubelet[2409]: E0114 01:21:13.121277 2409 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 14 01:21:13.182907 kubelet[2409]: E0114 01:21:13.182878 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:21:13.183137 kubelet[2409]: E0114 01:21:13.183028 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:13.217271 kubelet[2409]: I0114 01:21:13.217162 2409 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 01:21:13.217271 kubelet[2409]: E0114 01:21:13.217194 2409 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 14 01:21:13.264417 kubelet[2409]: I0114 01:21:13.264334 2409 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 01:21:13.271092 kubelet[2409]: E0114 01:21:13.271045 2409 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 14 01:21:13.271092 kubelet[2409]: I0114 01:21:13.271078 2409 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 01:21:13.273002 kubelet[2409]: 
E0114 01:21:13.272921 2409 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 14 01:21:13.273002 kubelet[2409]: I0114 01:21:13.272938 2409 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 01:21:13.274837 kubelet[2409]: E0114 01:21:13.274773 2409 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 14 01:21:13.644964 kubelet[2409]: I0114 01:21:13.644867 2409 apiserver.go:52] "Watching apiserver" Jan 14 01:21:13.664090 kubelet[2409]: I0114 01:21:13.663877 2409 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:21:15.538910 systemd[1]: Reload requested from client PID 2697 ('systemctl') (unit session-8.scope)... Jan 14 01:21:15.538978 systemd[1]: Reloading... Jan 14 01:21:15.642606 zram_generator::config[2743]: No configuration found. Jan 14 01:21:15.879935 systemd[1]: Reloading finished in 340 ms. Jan 14 01:21:15.912981 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 01:21:15.913490 kubelet[2409]: I0114 01:21:15.913396 2409 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:21:15.913884 kubelet[2409]: E0114 01:21:15.913498 2409 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.188a744b5896e2d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 01:21:10.653993681 +0000 UTC m=+0.557091655,LastTimestamp:2026-01-14 01:21:10.653993681 +0000 UTC m=+0.557091655,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 01:21:15.936836 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 01:21:15.937299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:21:15.937421 systemd[1]: kubelet.service: Consumed 1.166s CPU time, 130.6M memory peak. Jan 14 01:21:15.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:15.939813 kernel: kauditd_printk_skb: 159 callbacks suppressed Jan 14 01:21:15.939871 kernel: audit: type=1131 audit(1768353675.935:392): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:15.940221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 01:21:15.940000 audit: BPF prog-id=113 op=LOAD Jan 14 01:21:15.951356 kernel: audit: type=1334 audit(1768353675.940:393): prog-id=113 op=LOAD Jan 14 01:21:15.951398 kernel: audit: type=1334 audit(1768353675.940:394): prog-id=63 op=UNLOAD Jan 14 01:21:15.940000 audit: BPF prog-id=63 op=UNLOAD Jan 14 01:21:15.954487 kernel: audit: type=1334 audit(1768353675.940:395): prog-id=114 op=LOAD Jan 14 01:21:15.940000 audit: BPF prog-id=114 op=LOAD Jan 14 01:21:15.957344 kernel: audit: type=1334 audit(1768353675.940:396): prog-id=115 op=LOAD Jan 14 01:21:15.940000 audit: BPF prog-id=115 op=LOAD Jan 14 01:21:15.960193 kernel: audit: type=1334 audit(1768353675.940:397): prog-id=64 op=UNLOAD Jan 14 01:21:15.940000 audit: BPF prog-id=64 op=UNLOAD Jan 14 01:21:15.963099 kernel: audit: type=1334 audit(1768353675.940:398): prog-id=65 op=UNLOAD Jan 14 01:21:15.940000 audit: BPF prog-id=65 op=UNLOAD Jan 14 01:21:15.965956 kernel: audit: type=1334 audit(1768353675.942:399): prog-id=116 op=LOAD Jan 14 01:21:15.942000 audit: BPF prog-id=116 op=LOAD Jan 14 01:21:15.968809 kernel: audit: type=1334 audit(1768353675.942:400): prog-id=67 op=UNLOAD Jan 14 01:21:15.942000 audit: BPF prog-id=67 op=UNLOAD Jan 14 01:21:15.971920 kernel: audit: type=1334 audit(1768353675.942:401): prog-id=117 op=LOAD Jan 14 01:21:15.942000 audit: BPF prog-id=117 op=LOAD Jan 14 01:21:15.942000 audit: BPF prog-id=118 op=LOAD Jan 14 01:21:15.942000 audit: BPF prog-id=68 op=UNLOAD Jan 14 01:21:15.942000 audit: BPF prog-id=69 op=UNLOAD Jan 14 01:21:15.943000 audit: BPF prog-id=119 op=LOAD Jan 14 01:21:15.943000 audit: BPF prog-id=73 op=UNLOAD Jan 14 01:21:15.943000 audit: BPF prog-id=120 op=LOAD Jan 14 01:21:15.944000 audit: BPF prog-id=121 op=LOAD Jan 14 01:21:15.944000 audit: BPF prog-id=74 op=UNLOAD Jan 14 01:21:15.944000 audit: BPF prog-id=75 op=UNLOAD Jan 14 01:21:15.945000 audit: BPF prog-id=122 op=LOAD Jan 14 01:21:15.945000 audit: BPF prog-id=70 op=UNLOAD Jan 14 01:21:15.945000 audit: BPF prog-id=123 
op=LOAD Jan 14 01:21:15.945000 audit: BPF prog-id=66 op=UNLOAD Jan 14 01:21:15.947000 audit: BPF prog-id=124 op=LOAD Jan 14 01:21:15.947000 audit: BPF prog-id=79 op=UNLOAD Jan 14 01:21:15.981000 audit: BPF prog-id=125 op=LOAD Jan 14 01:21:15.981000 audit: BPF prog-id=80 op=UNLOAD Jan 14 01:21:15.981000 audit: BPF prog-id=126 op=LOAD Jan 14 01:21:15.981000 audit: BPF prog-id=127 op=LOAD Jan 14 01:21:15.981000 audit: BPF prog-id=81 op=UNLOAD Jan 14 01:21:15.981000 audit: BPF prog-id=82 op=UNLOAD Jan 14 01:21:15.983000 audit: BPF prog-id=128 op=LOAD Jan 14 01:21:15.983000 audit: BPF prog-id=76 op=UNLOAD Jan 14 01:21:15.983000 audit: BPF prog-id=129 op=LOAD Jan 14 01:21:15.983000 audit: BPF prog-id=130 op=LOAD Jan 14 01:21:15.983000 audit: BPF prog-id=77 op=UNLOAD Jan 14 01:21:15.983000 audit: BPF prog-id=78 op=UNLOAD Jan 14 01:21:15.983000 audit: BPF prog-id=131 op=LOAD Jan 14 01:21:15.983000 audit: BPF prog-id=132 op=LOAD Jan 14 01:21:15.983000 audit: BPF prog-id=71 op=UNLOAD Jan 14 01:21:15.983000 audit: BPF prog-id=72 op=UNLOAD Jan 14 01:21:16.190168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:21:16.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:16.206079 (kubelet)[2787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:21:16.284987 kubelet[2787]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:21:16.285349 kubelet[2787]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 14 01:21:16.285349 kubelet[2787]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:21:16.285349 kubelet[2787]: I0114 01:21:16.285284 2787 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:21:16.299123 kubelet[2787]: I0114 01:21:16.299032 2787 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 01:21:16.299123 kubelet[2787]: I0114 01:21:16.299104 2787 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:21:16.299382 kubelet[2787]: I0114 01:21:16.299342 2787 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:21:16.300788 kubelet[2787]: I0114 01:21:16.300749 2787 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 01:21:16.303630 kubelet[2787]: I0114 01:21:16.303300 2787 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:21:16.315625 kubelet[2787]: I0114 01:21:16.315602 2787 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:21:16.324972 kubelet[2787]: I0114 01:21:16.324867 2787 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 01:21:16.325553 kubelet[2787]: I0114 01:21:16.325468 2787 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:21:16.325944 kubelet[2787]: I0114 01:21:16.325758 2787 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:21:16.325944 kubelet[2787]: I0114 01:21:16.325942 2787 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:21:16.326079 
kubelet[2787]: I0114 01:21:16.325954 2787 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 01:21:16.326079 kubelet[2787]: I0114 01:21:16.326000 2787 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:21:16.326234 kubelet[2787]: I0114 01:21:16.326190 2787 kubelet.go:480] "Attempting to sync node with API server" Jan 14 01:21:16.326234 kubelet[2787]: I0114 01:21:16.326204 2787 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:21:16.326234 kubelet[2787]: I0114 01:21:16.326225 2787 kubelet.go:386] "Adding apiserver pod source" Jan 14 01:21:16.326348 kubelet[2787]: I0114 01:21:16.326239 2787 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:21:16.328734 kubelet[2787]: I0114 01:21:16.328623 2787 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:21:16.332373 kubelet[2787]: I0114 01:21:16.331598 2787 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:21:16.341659 kubelet[2787]: I0114 01:21:16.341493 2787 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:21:16.341659 kubelet[2787]: I0114 01:21:16.341647 2787 server.go:1289] "Started kubelet" Jan 14 01:21:16.345144 kubelet[2787]: I0114 01:21:16.344869 2787 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:21:16.346559 kubelet[2787]: I0114 01:21:16.346133 2787 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:21:16.346774 kubelet[2787]: I0114 01:21:16.345012 2787 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:21:16.349582 kubelet[2787]: I0114 01:21:16.349000 2787 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:21:16.350439 
kubelet[2787]: I0114 01:21:16.350328 2787 server.go:317] "Adding debug handlers to kubelet server" Jan 14 01:21:16.351733 kubelet[2787]: E0114 01:21:16.351705 2787 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:21:16.353634 kubelet[2787]: I0114 01:21:16.352100 2787 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:21:16.353634 kubelet[2787]: I0114 01:21:16.352117 2787 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:21:16.353856 kubelet[2787]: I0114 01:21:16.352109 2787 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:21:16.354057 kubelet[2787]: I0114 01:21:16.353959 2787 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:21:16.357046 kubelet[2787]: I0114 01:21:16.356924 2787 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:21:16.358408 kubelet[2787]: I0114 01:21:16.357608 2787 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:21:16.361018 kubelet[2787]: I0114 01:21:16.360922 2787 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:21:16.388789 kubelet[2787]: I0114 01:21:16.388734 2787 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 01:21:16.391729 kubelet[2787]: I0114 01:21:16.391670 2787 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 14 01:21:16.391792 kubelet[2787]: I0114 01:21:16.391750 2787 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 01:21:16.391792 kubelet[2787]: I0114 01:21:16.391771 2787 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:21:16.391792 kubelet[2787]: I0114 01:21:16.391779 2787 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 01:21:16.392196 kubelet[2787]: E0114 01:21:16.391821 2787 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:21:16.426937 kubelet[2787]: I0114 01:21:16.426787 2787 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:21:16.426937 kubelet[2787]: I0114 01:21:16.426829 2787 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:21:16.426937 kubelet[2787]: I0114 01:21:16.426847 2787 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:21:16.427063 kubelet[2787]: I0114 01:21:16.426956 2787 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 01:21:16.427063 kubelet[2787]: I0114 01:21:16.426965 2787 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 01:21:16.427063 kubelet[2787]: I0114 01:21:16.426980 2787 policy_none.go:49] "None policy: Start" Jan 14 01:21:16.427063 kubelet[2787]: I0114 01:21:16.426990 2787 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:21:16.427063 kubelet[2787]: I0114 01:21:16.427000 2787 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:21:16.427167 kubelet[2787]: I0114 01:21:16.427078 2787 state_mem.go:75] "Updated machine memory state" Jan 14 01:21:16.434751 kubelet[2787]: E0114 01:21:16.434730 2787 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:21:16.435646 kubelet[2787]: I0114 
01:21:16.435185 2787 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:21:16.435646 kubelet[2787]: I0114 01:21:16.435198 2787 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:21:16.436220 kubelet[2787]: I0114 01:21:16.436162 2787 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:21:16.440637 kubelet[2787]: E0114 01:21:16.439029 2787 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 01:21:16.493022 kubelet[2787]: I0114 01:21:16.492848 2787 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 01:21:16.493022 kubelet[2787]: I0114 01:21:16.493004 2787 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 01:21:16.493730 kubelet[2787]: I0114 01:21:16.493711 2787 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 01:21:16.545799 kubelet[2787]: I0114 01:21:16.545700 2787 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:21:16.556015 kubelet[2787]: I0114 01:21:16.555919 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:21:16.556015 kubelet[2787]: I0114 01:21:16.555975 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 14 01:21:16.556015 kubelet[2787]: I0114 01:21:16.555997 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:21:16.556015 kubelet[2787]: I0114 01:21:16.556017 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9807b6bdc071373d0a5826ec3470e31f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9807b6bdc071373d0a5826ec3470e31f\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:21:16.556217 kubelet[2787]: I0114 01:21:16.556031 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9807b6bdc071373d0a5826ec3470e31f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9807b6bdc071373d0a5826ec3470e31f\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:21:16.556217 kubelet[2787]: I0114 01:21:16.556044 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:21:16.556217 kubelet[2787]: I0114 01:21:16.556055 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:21:16.556217 kubelet[2787]: I0114 01:21:16.556067 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 14 01:21:16.556217 kubelet[2787]: I0114 01:21:16.556078 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9807b6bdc071373d0a5826ec3470e31f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9807b6bdc071373d0a5826ec3470e31f\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:21:16.557845 kubelet[2787]: I0114 01:21:16.557672 2787 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 14 01:21:16.557845 kubelet[2787]: I0114 01:21:16.557740 2787 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 01:21:16.802079 kubelet[2787]: E0114 01:21:16.801978 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:16.802834 kubelet[2787]: E0114 01:21:16.802733 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:16.803673 kubelet[2787]: E0114 01:21:16.803631 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:17.328579 kubelet[2787]: I0114 01:21:17.328032 2787 apiserver.go:52] "Watching apiserver" Jan 14 01:21:17.353372 kubelet[2787]: I0114 
01:21:17.353283 2787 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:21:17.410993 kubelet[2787]: E0114 01:21:17.410890 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:17.410993 kubelet[2787]: I0114 01:21:17.410992 2787 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 01:21:17.412579 kubelet[2787]: I0114 01:21:17.411245 2787 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 01:21:17.424950 kubelet[2787]: E0114 01:21:17.424861 2787 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 14 01:21:17.425327 kubelet[2787]: E0114 01:21:17.425103 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:17.425764 kubelet[2787]: E0114 01:21:17.425397 2787 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 14 01:21:17.425764 kubelet[2787]: E0114 01:21:17.425483 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:17.452964 kubelet[2787]: I0114 01:21:17.452172 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.452152712 podStartE2EDuration="1.452152712s" podCreationTimestamp="2026-01-14 01:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-14 01:21:17.441195033 +0000 UTC m=+1.218762098" watchObservedRunningTime="2026-01-14 01:21:17.452152712 +0000 UTC m=+1.229719787" Jan 14 01:21:17.452964 kubelet[2787]: I0114 01:21:17.452311 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.452305911 podStartE2EDuration="1.452305911s" podCreationTimestamp="2026-01-14 01:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:21:17.451753865 +0000 UTC m=+1.229320920" watchObservedRunningTime="2026-01-14 01:21:17.452305911 +0000 UTC m=+1.229872966" Jan 14 01:21:17.463046 kubelet[2787]: I0114 01:21:17.462964 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.462951954 podStartE2EDuration="1.462951954s" podCreationTimestamp="2026-01-14 01:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:21:17.462773295 +0000 UTC m=+1.240340350" watchObservedRunningTime="2026-01-14 01:21:17.462951954 +0000 UTC m=+1.240519029" Jan 14 01:21:18.412391 kubelet[2787]: E0114 01:21:18.412315 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:18.413451 kubelet[2787]: E0114 01:21:18.413364 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:19.244385 kubelet[2787]: E0114 01:21:19.244293 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jan 14 01:21:20.808218 kubelet[2787]: I0114 01:21:20.808120 2787 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 01:21:20.808884 containerd[1611]: time="2026-01-14T01:21:20.808776470Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 01:21:20.809243 kubelet[2787]: I0114 01:21:20.809032 2787 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 01:21:21.964955 systemd[1]: Created slice kubepods-besteffort-pod86530971_90e5_468b_a99c_ec5a9b710c43.slice - libcontainer container kubepods-besteffort-pod86530971_90e5_468b_a99c_ec5a9b710c43.slice. Jan 14 01:21:21.999234 kubelet[2787]: I0114 01:21:21.999120 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86530971-90e5-468b-a99c-ec5a9b710c43-xtables-lock\") pod \"kube-proxy-tdjbx\" (UID: \"86530971-90e5-468b-a99c-ec5a9b710c43\") " pod="kube-system/kube-proxy-tdjbx" Jan 14 01:21:21.999234 kubelet[2787]: I0114 01:21:21.999221 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86530971-90e5-468b-a99c-ec5a9b710c43-lib-modules\") pod \"kube-proxy-tdjbx\" (UID: \"86530971-90e5-468b-a99c-ec5a9b710c43\") " pod="kube-system/kube-proxy-tdjbx" Jan 14 01:21:21.999703 kubelet[2787]: I0114 01:21:21.999241 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/86530971-90e5-468b-a99c-ec5a9b710c43-kube-proxy\") pod \"kube-proxy-tdjbx\" (UID: \"86530971-90e5-468b-a99c-ec5a9b710c43\") " pod="kube-system/kube-proxy-tdjbx" Jan 14 01:21:21.999703 kubelet[2787]: I0114 01:21:21.999307 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-qj5tv\" (UniqueName: \"kubernetes.io/projected/86530971-90e5-468b-a99c-ec5a9b710c43-kube-api-access-qj5tv\") pod \"kube-proxy-tdjbx\" (UID: \"86530971-90e5-468b-a99c-ec5a9b710c43\") " pod="kube-system/kube-proxy-tdjbx" Jan 14 01:21:22.104070 systemd[1]: Created slice kubepods-besteffort-poddedb411e_7477_4ddb_942a_81585913d8f2.slice - libcontainer container kubepods-besteffort-poddedb411e_7477_4ddb_942a_81585913d8f2.slice. Jan 14 01:21:22.201226 kubelet[2787]: I0114 01:21:22.201024 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4w24\" (UniqueName: \"kubernetes.io/projected/dedb411e-7477-4ddb-942a-81585913d8f2-kube-api-access-z4w24\") pod \"tigera-operator-7dcd859c48-zd5kz\" (UID: \"dedb411e-7477-4ddb-942a-81585913d8f2\") " pod="tigera-operator/tigera-operator-7dcd859c48-zd5kz" Jan 14 01:21:22.201226 kubelet[2787]: I0114 01:21:22.201095 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dedb411e-7477-4ddb-942a-81585913d8f2-var-lib-calico\") pod \"tigera-operator-7dcd859c48-zd5kz\" (UID: \"dedb411e-7477-4ddb-942a-81585913d8f2\") " pod="tigera-operator/tigera-operator-7dcd859c48-zd5kz" Jan 14 01:21:22.287437 kubelet[2787]: E0114 01:21:22.287266 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:22.288450 containerd[1611]: time="2026-01-14T01:21:22.288223308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tdjbx,Uid:86530971-90e5-468b-a99c-ec5a9b710c43,Namespace:kube-system,Attempt:0,}" Jan 14 01:21:22.350002 containerd[1611]: time="2026-01-14T01:21:22.349835374Z" level=info msg="connecting to shim e6ff4856f120e9d26c4bb942bf43ff6a0f7581993b85509825c543904b5a8583" 
address="unix:///run/containerd/s/2511e94bc82dbb7818d8440c154fc3ec58e5ce1e8c4f4b08d8e34792bf0086b6" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:22.409402 containerd[1611]: time="2026-01-14T01:21:22.409333424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zd5kz,Uid:dedb411e-7477-4ddb-942a-81585913d8f2,Namespace:tigera-operator,Attempt:0,}" Jan 14 01:21:22.425997 systemd[1]: Started cri-containerd-e6ff4856f120e9d26c4bb942bf43ff6a0f7581993b85509825c543904b5a8583.scope - libcontainer container e6ff4856f120e9d26c4bb942bf43ff6a0f7581993b85509825c543904b5a8583. Jan 14 01:21:22.452373 containerd[1611]: time="2026-01-14T01:21:22.452314838Z" level=info msg="connecting to shim 8cb1fe474e13fed4a90a9b1407c2876de422d3f0021a9a4087c746fb026f5cac" address="unix:///run/containerd/s/1f7e58231dc7f30312de783418e54ea27d59dfda791260d49ded379d204c716e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:22.455000 audit: BPF prog-id=133 op=LOAD Jan 14 01:21:22.462024 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 01:21:22.462101 kernel: audit: type=1334 audit(1768353682.455:434): prog-id=133 op=LOAD Jan 14 01:21:22.456000 audit: BPF prog-id=134 op=LOAD Jan 14 01:21:22.469567 kernel: audit: type=1334 audit(1768353682.456:435): prog-id=134 op=LOAD Jan 14 01:21:22.469636 kernel: audit: type=1300 audit(1768353682.456:435): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2856 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.456000 audit[2867]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2856 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:21:22.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536666634383536663132306539643236633462623934326266343366 Jan 14 01:21:22.506056 kernel: audit: type=1327 audit(1768353682.456:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536666634383536663132306539643236633462623934326266343366 Jan 14 01:21:22.506214 kernel: audit: type=1334 audit(1768353682.457:436): prog-id=134 op=UNLOAD Jan 14 01:21:22.457000 audit: BPF prog-id=134 op=UNLOAD Jan 14 01:21:22.457000 audit[2867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2856 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.517121 containerd[1611]: time="2026-01-14T01:21:22.517071656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tdjbx,Uid:86530971-90e5-468b-a99c-ec5a9b710c43,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6ff4856f120e9d26c4bb942bf43ff6a0f7581993b85509825c543904b5a8583\"" Jan 14 01:21:22.518104 kubelet[2787]: E0114 01:21:22.517930 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:22.525174 containerd[1611]: time="2026-01-14T01:21:22.524937760Z" level=info msg="CreateContainer within sandbox \"e6ff4856f120e9d26c4bb942bf43ff6a0f7581993b85509825c543904b5a8583\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 01:21:22.529699 kernel: audit: type=1300 audit(1768353682.457:436): 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2856 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.530718 kernel: audit: type=1327 audit(1768353682.457:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536666634383536663132306539643236633462623934326266343366 Jan 14 01:21:22.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536666634383536663132306539643236633462623934326266343366 Jan 14 01:21:22.457000 audit: BPF prog-id=135 op=LOAD Jan 14 01:21:22.548936 systemd[1]: Started cri-containerd-8cb1fe474e13fed4a90a9b1407c2876de422d3f0021a9a4087c746fb026f5cac.scope - libcontainer container 8cb1fe474e13fed4a90a9b1407c2876de422d3f0021a9a4087c746fb026f5cac. 
Jan 14 01:21:22.550052 kernel: audit: type=1334 audit(1768353682.457:437): prog-id=135 op=LOAD Jan 14 01:21:22.457000 audit[2867]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2856 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.565624 kernel: audit: type=1300 audit(1768353682.457:437): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2856 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.566384 containerd[1611]: time="2026-01-14T01:21:22.566301897Z" level=info msg="Container 04784f3cb7ddf9d21bf22f3bbbc400f405157a39ad90a876e1605288e3c1a047: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:22.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536666634383536663132306539643236633462623934326266343366 Jan 14 01:21:22.579648 containerd[1611]: time="2026-01-14T01:21:22.579500240Z" level=info msg="CreateContainer within sandbox \"e6ff4856f120e9d26c4bb942bf43ff6a0f7581993b85509825c543904b5a8583\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"04784f3cb7ddf9d21bf22f3bbbc400f405157a39ad90a876e1605288e3c1a047\"" Jan 14 01:21:22.581581 containerd[1611]: time="2026-01-14T01:21:22.581049125Z" level=info msg="StartContainer for \"04784f3cb7ddf9d21bf22f3bbbc400f405157a39ad90a876e1605288e3c1a047\"" Jan 14 01:21:22.582824 containerd[1611]: time="2026-01-14T01:21:22.582801976Z" level=info msg="connecting to shim 04784f3cb7ddf9d21bf22f3bbbc400f405157a39ad90a876e1605288e3c1a047" 
address="unix:///run/containerd/s/2511e94bc82dbb7818d8440c154fc3ec58e5ce1e8c4f4b08d8e34792bf0086b6" protocol=ttrpc version=3 Jan 14 01:21:22.457000 audit: BPF prog-id=136 op=LOAD Jan 14 01:21:22.457000 audit[2867]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2856 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536666634383536663132306539643236633462623934326266343366 Jan 14 01:21:22.457000 audit: BPF prog-id=136 op=UNLOAD Jan 14 01:21:22.457000 audit[2867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2856 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536666634383536663132306539643236633462623934326266343366 Jan 14 01:21:22.457000 audit: BPF prog-id=135 op=UNLOAD Jan 14 01:21:22.457000 audit[2867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2856 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.584710 kernel: audit: type=1327 audit(1768353682.457:437): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536666634383536663132306539643236633462623934326266343366 Jan 14 01:21:22.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536666634383536663132306539643236633462623934326266343366 Jan 14 01:21:22.457000 audit: BPF prog-id=137 op=LOAD Jan 14 01:21:22.457000 audit[2867]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2856 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536666634383536663132306539643236633462623934326266343366 Jan 14 01:21:22.566000 audit: BPF prog-id=138 op=LOAD Jan 14 01:21:22.567000 audit: BPF prog-id=139 op=LOAD Jan 14 01:21:22.567000 audit[2906]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2893 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863623166653437346531336665643461393061396231343037633238 Jan 
14 01:21:22.567000 audit: BPF prog-id=139 op=UNLOAD Jan 14 01:21:22.567000 audit[2906]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2893 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863623166653437346531336665643461393061396231343037633238 Jan 14 01:21:22.567000 audit: BPF prog-id=140 op=LOAD Jan 14 01:21:22.567000 audit[2906]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2893 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863623166653437346531336665643461393061396231343037633238 Jan 14 01:21:22.567000 audit: BPF prog-id=141 op=LOAD Jan 14 01:21:22.567000 audit[2906]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2893 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.567000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863623166653437346531336665643461393061396231343037633238 Jan 14 01:21:22.567000 audit: BPF prog-id=141 op=UNLOAD Jan 14 01:21:22.567000 audit[2906]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2893 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863623166653437346531336665643461393061396231343037633238 Jan 14 01:21:22.568000 audit: BPF prog-id=140 op=UNLOAD Jan 14 01:21:22.568000 audit[2906]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2893 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863623166653437346531336665643461393061396231343037633238 Jan 14 01:21:22.568000 audit: BPF prog-id=142 op=LOAD Jan 14 01:21:22.568000 audit[2906]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2893 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:21:22.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863623166653437346531336665643461393061396231343037633238 Jan 14 01:21:22.611752 systemd[1]: Started cri-containerd-04784f3cb7ddf9d21bf22f3bbbc400f405157a39ad90a876e1605288e3c1a047.scope - libcontainer container 04784f3cb7ddf9d21bf22f3bbbc400f405157a39ad90a876e1605288e3c1a047. Jan 14 01:21:22.647945 containerd[1611]: time="2026-01-14T01:21:22.647717275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zd5kz,Uid:dedb411e-7477-4ddb-942a-81585913d8f2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8cb1fe474e13fed4a90a9b1407c2876de422d3f0021a9a4087c746fb026f5cac\"" Jan 14 01:21:22.654879 containerd[1611]: time="2026-01-14T01:21:22.654390597Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 01:21:22.690000 audit: BPF prog-id=143 op=LOAD Jan 14 01:21:22.690000 audit[2933]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2856 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034373834663363623764646639643231626632326633626262633430 Jan 14 01:21:22.690000 audit: BPF prog-id=144 op=LOAD Jan 14 01:21:22.690000 audit[2933]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2856 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034373834663363623764646639643231626632326633626262633430 Jan 14 01:21:22.691000 audit: BPF prog-id=144 op=UNLOAD Jan 14 01:21:22.691000 audit[2933]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2856 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034373834663363623764646639643231626632326633626262633430 Jan 14 01:21:22.691000 audit: BPF prog-id=143 op=UNLOAD Jan 14 01:21:22.691000 audit[2933]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2856 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034373834663363623764646639643231626632326633626262633430 Jan 14 01:21:22.691000 audit: BPF prog-id=145 op=LOAD Jan 14 01:21:22.691000 audit[2933]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2856 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034373834663363623764646639643231626632326633626262633430 Jan 14 01:21:22.723433 containerd[1611]: time="2026-01-14T01:21:22.723356046Z" level=info msg="StartContainer for \"04784f3cb7ddf9d21bf22f3bbbc400f405157a39ad90a876e1605288e3c1a047\" returns successfully" Jan 14 01:21:22.939000 audit[3005]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:22.939000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe02cfe2f0 a2=0 a3=7ffe02cfe2dc items=0 ppid=2947 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:21:22.942000 audit[3008]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:22.942000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff93e0ac20 a2=0 a3=7fff93e0ac0c items=0 ppid=2947 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.942000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:21:22.942000 
audit[3007]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:22.942000 audit[3007]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde6459b00 a2=0 a3=61ff62a5ca736735 items=0 ppid=2947 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:21:22.944000 audit[3010]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:22.945000 audit[3011]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:22.945000 audit[3011]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed8a684b0 a2=0 a3=7ffed8a6849c items=0 ppid=2947 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.945000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:21:22.944000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7f616440 a2=0 a3=7fff7f61642c items=0 ppid=2947 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.944000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:21:22.950000 audit[3012]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:22.950000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc05a72470 a2=0 a3=7ffc05a7245c items=0 ppid=2947 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.950000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:21:23.053000 audit[3016]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.053000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffeb2dd7c10 a2=0 a3=7ffeb2dd7bfc items=0 ppid=2947 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.053000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:21:23.059000 audit[3018]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.059000 audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc8e8354c0 a2=0 a3=7ffc8e8354ac items=0 ppid=2947 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:21:23.059000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 01:21:23.068000 audit[3021]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.068000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd459f9f10 a2=0 a3=7ffd459f9efc items=0 ppid=2947 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.068000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 01:21:23.070000 audit[3022]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.070000 audit[3022]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0659dd00 a2=0 a3=7fff0659dcec items=0 ppid=2947 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.070000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:21:23.075000 audit[3024]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 
01:21:23.075000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc33920560 a2=0 a3=7ffc3392054c items=0 ppid=2947 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.075000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:21:23.078000 audit[3025]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.078000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe78ba2780 a2=0 a3=7ffe78ba276c items=0 ppid=2947 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.078000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:21:23.083000 audit[3027]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.083000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe7d0f1580 a2=0 a3=7ffe7d0f156c items=0 ppid=2947 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.083000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:21:23.092000 audit[3030]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.092000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd00d02bf0 a2=0 a3=7ffd00d02bdc items=0 ppid=2947 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 01:21:23.095000 audit[3031]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.095000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdead70bc0 a2=0 a3=7ffdead70bac items=0 ppid=2947 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.095000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:21:23.100000 audit[3033]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.100000 audit[3033]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffc05941db0 a2=0 a3=7ffc05941d9c items=0 ppid=2947 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.100000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:21:23.103000 audit[3034]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.103000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8a0a75c0 a2=0 a3=7ffc8a0a75ac items=0 ppid=2947 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.103000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:21:23.109000 audit[3036]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.109000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff830b4c00 a2=0 a3=7fff830b4bec items=0 ppid=2947 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.109000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:21:23.118000 audit[3039]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.118000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffff6956e30 a2=0 a3=7ffff6956e1c items=0 ppid=2947 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:21:23.128000 audit[3042]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.128000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeb3f67ee0 a2=0 a3=7ffeb3f67ecc items=0 ppid=2947 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.128000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:21:23.131000 audit[3043]: NETFILTER_CFG table=nat:74 family=2 entries=1 
op=nft_register_chain pid=3043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.131000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff7225fe80 a2=0 a3=7fff7225fe6c items=0 ppid=2947 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.131000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:21:23.138000 audit[3045]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.138000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe9ed99040 a2=0 a3=7ffe9ed9902c items=0 ppid=2947 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.138000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:21:23.146000 audit[3048]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.146000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc6d65f4b0 a2=0 a3=7ffc6d65f49c items=0 ppid=2947 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.146000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:21:23.149000 audit[3049]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.149000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff001f09c0 a2=0 a3=7fff001f09ac items=0 ppid=2947 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.149000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:21:23.155000 audit[3051]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:21:23.155000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe80455cf0 a2=0 a3=7ffe80455cdc items=0 ppid=2947 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.155000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:21:23.191000 audit[3057]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:23.191000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffce208d0a0 a2=0 a3=7ffce208d08c 
items=0 ppid=2947 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.191000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:23.202000 audit[3057]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:23.202000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffce208d0a0 a2=0 a3=7ffce208d08c items=0 ppid=2947 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:23.205000 audit[3062]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.205000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd3c74be50 a2=0 a3=7ffd3c74be3c items=0 ppid=2947 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.205000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:21:23.211000 audit[3064]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.211000 audit[3064]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7ffe7b74cb00 a2=0 a3=7ffe7b74caec items=0 ppid=2947 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.211000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 01:21:23.221000 audit[3067]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.221000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffbc680f50 a2=0 a3=7fffbc680f3c items=0 ppid=2947 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.221000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 01:21:23.224000 audit[3068]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.224000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd98aa2b60 a2=0 a3=7ffd98aa2b4c items=0 ppid=2947 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.224000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:21:23.231000 audit[3070]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.231000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe825ff3b0 a2=0 a3=7ffe825ff39c items=0 ppid=2947 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.231000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:21:23.234000 audit[3071]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.234000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf4f5eb90 a2=0 a3=7ffdf4f5eb7c items=0 ppid=2947 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.234000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:21:23.241000 audit[3073]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.241000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcaf41e2e0 a2=0 a3=7ffcaf41e2cc items=0 ppid=2947 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.241000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 01:21:23.251000 audit[3076]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.251000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffda3ebc510 a2=0 a3=7ffda3ebc4fc items=0 ppid=2947 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.251000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:21:23.255000 audit[3077]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.255000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe97ca97f0 a2=0 a3=7ffe97ca97dc items=0 ppid=2947 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.255000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:21:23.261000 audit[3079]: NETFILTER_CFG 
table=filter:90 family=10 entries=1 op=nft_register_rule pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.261000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcc5105a00 a2=0 a3=7ffcc51059ec items=0 ppid=2947 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.261000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:21:23.263000 audit[3080]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.263000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd9fc4f00 a2=0 a3=7ffdd9fc4eec items=0 ppid=2947 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:21:23.271000 audit[3082]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.271000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee8de3a50 a2=0 a3=7ffee8de3a3c items=0 ppid=2947 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.271000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:21:23.281000 audit[3085]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.281000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffedf6b010 a2=0 a3=7fffedf6affc items=0 ppid=2947 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.281000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:21:23.290000 audit[3088]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.290000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd708930b0 a2=0 a3=7ffd7089309c items=0 ppid=2947 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.290000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 01:21:23.292000 audit[3089]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.292000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdc1d88540 a2=0 a3=7ffdc1d8852c items=0 ppid=2947 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.292000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:21:23.299000 audit[3091]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.299000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc08d3d900 a2=0 a3=7ffc08d3d8ec items=0 ppid=2947 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.299000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:21:23.308000 audit[3094]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.308000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd246da7c0 a2=0 a3=7ffd246da7ac items=0 ppid=2947 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.308000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:21:23.310000 audit[3095]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.310000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc15fa33a0 a2=0 a3=7ffc15fa338c items=0 ppid=2947 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.310000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:21:23.316000 audit[3097]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.316000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc522840a0 a2=0 a3=7ffc5228408c items=0 ppid=2947 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.316000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:21:23.318000 audit[3098]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.318000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd61d4eb90 a2=0 
a3=7ffd61d4eb7c items=0 ppid=2947 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:21:23.323000 audit[3100]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.323000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffeddb3ac20 a2=0 a3=7ffeddb3ac0c items=0 ppid=2947 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.323000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:21:23.332000 audit[3103]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:21:23.332000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff380ce600 a2=0 a3=7fff380ce5ec items=0 ppid=2947 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:21:23.339000 audit[3105]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:21:23.339000 audit[3105]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc69aa9fe0 a2=0 a3=7ffc69aa9fcc items=0 ppid=2947 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.339000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:23.339000 audit[3105]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:21:23.339000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc69aa9fe0 a2=0 a3=7ffc69aa9fcc items=0 ppid=2947 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.339000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:23.427405 kubelet[2787]: E0114 01:21:23.427186 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:23.442048 kubelet[2787]: I0114 01:21:23.441887 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tdjbx" podStartSLOduration=2.441871929 podStartE2EDuration="2.441871929s" podCreationTimestamp="2026-01-14 01:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:21:23.441111768 +0000 UTC m=+7.218678824" watchObservedRunningTime="2026-01-14 01:21:23.441871929 +0000 UTC m=+7.219438984" Jan 14 01:21:24.289641 kubelet[2787]: E0114 
01:21:24.289482 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:24.430967 kubelet[2787]: E0114 01:21:24.430939 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:25.021906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926056666.mount: Deactivated successfully. Jan 14 01:21:25.433931 kubelet[2787]: E0114 01:21:25.433801 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:25.781685 kubelet[2787]: E0114 01:21:25.781342 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:26.437109 kubelet[2787]: E0114 01:21:26.436748 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:27.187764 update_engine[1588]: I20260114 01:21:27.187037 1588 update_attempter.cc:509] Updating boot flags... 
Jan 14 01:21:27.536049 containerd[1611]: time="2026-01-14T01:21:27.535886580Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:27.537132 containerd[1611]: time="2026-01-14T01:21:27.537091122Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25052948" Jan 14 01:21:27.538960 containerd[1611]: time="2026-01-14T01:21:27.538884282Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:27.541390 containerd[1611]: time="2026-01-14T01:21:27.541312279Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:27.541929 containerd[1611]: time="2026-01-14T01:21:27.541867188Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.887439928s" Jan 14 01:21:27.541929 containerd[1611]: time="2026-01-14T01:21:27.541922175Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 01:21:27.548207 containerd[1611]: time="2026-01-14T01:21:27.547829149Z" level=info msg="CreateContainer within sandbox \"8cb1fe474e13fed4a90a9b1407c2876de422d3f0021a9a4087c746fb026f5cac\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 01:21:27.561104 containerd[1611]: time="2026-01-14T01:21:27.560985148Z" level=info msg="Container 
61ee2cff49577820e9027bf8e970f0d145ef6c8152ee7ab72908043efae2ffc1: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:27.569489 containerd[1611]: time="2026-01-14T01:21:27.569214864Z" level=info msg="CreateContainer within sandbox \"8cb1fe474e13fed4a90a9b1407c2876de422d3f0021a9a4087c746fb026f5cac\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"61ee2cff49577820e9027bf8e970f0d145ef6c8152ee7ab72908043efae2ffc1\"" Jan 14 01:21:27.570830 containerd[1611]: time="2026-01-14T01:21:27.570490322Z" level=info msg="StartContainer for \"61ee2cff49577820e9027bf8e970f0d145ef6c8152ee7ab72908043efae2ffc1\"" Jan 14 01:21:27.572676 containerd[1611]: time="2026-01-14T01:21:27.572462720Z" level=info msg="connecting to shim 61ee2cff49577820e9027bf8e970f0d145ef6c8152ee7ab72908043efae2ffc1" address="unix:///run/containerd/s/1f7e58231dc7f30312de783418e54ea27d59dfda791260d49ded379d204c716e" protocol=ttrpc version=3 Jan 14 01:21:27.609074 systemd[1]: Started cri-containerd-61ee2cff49577820e9027bf8e970f0d145ef6c8152ee7ab72908043efae2ffc1.scope - libcontainer container 61ee2cff49577820e9027bf8e970f0d145ef6c8152ee7ab72908043efae2ffc1. 
Jan 14 01:21:27.631000 audit: BPF prog-id=146 op=LOAD Jan 14 01:21:27.634898 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 14 01:21:27.634967 kernel: audit: type=1334 audit(1768353687.631:506): prog-id=146 op=LOAD Jan 14 01:21:27.632000 audit: BPF prog-id=147 op=LOAD Jan 14 01:21:27.641764 kernel: audit: type=1334 audit(1768353687.632:507): prog-id=147 op=LOAD Jan 14 01:21:27.641836 kernel: audit: type=1300 audit(1768353687.632:507): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2893 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.632000 audit[3131]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2893 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656532636666343935373738323065393032376266386539373066 Jan 14 01:21:27.672306 kernel: audit: type=1327 audit(1768353687.632:507): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656532636666343935373738323065393032376266386539373066 Jan 14 01:21:27.632000 audit: BPF prog-id=147 op=UNLOAD Jan 14 01:21:27.676986 kernel: audit: type=1334 audit(1768353687.632:508): prog-id=147 op=UNLOAD Jan 14 01:21:27.677118 kernel: audit: type=1300 audit(1768353687.632:508): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=2893 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.632000 audit[3131]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2893 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656532636666343935373738323065393032376266386539373066 Jan 14 01:21:27.697733 containerd[1611]: time="2026-01-14T01:21:27.697656816Z" level=info msg="StartContainer for \"61ee2cff49577820e9027bf8e970f0d145ef6c8152ee7ab72908043efae2ffc1\" returns successfully" Jan 14 01:21:27.703453 kernel: audit: type=1327 audit(1768353687.632:508): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656532636666343935373738323065393032376266386539373066 Jan 14 01:21:27.632000 audit: BPF prog-id=148 op=LOAD Jan 14 01:21:27.632000 audit[3131]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2893 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.721152 kernel: audit: type=1334 audit(1768353687.632:509): prog-id=148 op=LOAD Jan 14 01:21:27.721262 kernel: audit: type=1300 audit(1768353687.632:509): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 
ppid=2893 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.721290 kernel: audit: type=1327 audit(1768353687.632:509): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656532636666343935373738323065393032376266386539373066 Jan 14 01:21:27.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656532636666343935373738323065393032376266386539373066 Jan 14 01:21:27.632000 audit: BPF prog-id=149 op=LOAD Jan 14 01:21:27.632000 audit[3131]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2893 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656532636666343935373738323065393032376266386539373066 Jan 14 01:21:27.632000 audit: BPF prog-id=149 op=UNLOAD Jan 14 01:21:27.632000 audit[3131]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2893 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.632000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656532636666343935373738323065393032376266386539373066 Jan 14 01:21:27.632000 audit: BPF prog-id=148 op=UNLOAD Jan 14 01:21:27.632000 audit[3131]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2893 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656532636666343935373738323065393032376266386539373066 Jan 14 01:21:27.632000 audit: BPF prog-id=150 op=LOAD Jan 14 01:21:27.632000 audit[3131]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2893 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656532636666343935373738323065393032376266386539373066 Jan 14 01:21:29.259856 kubelet[2787]: E0114 01:21:29.259770 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:29.289414 kubelet[2787]: I0114 01:21:29.289269 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="tigera-operator/tigera-operator-7dcd859c48-zd5kz" podStartSLOduration=2.398021788 podStartE2EDuration="7.289249922s" podCreationTimestamp="2026-01-14 01:21:22 +0000 UTC" firstStartedPulling="2026-01-14 01:21:22.651694427 +0000 UTC m=+6.429261483" lastFinishedPulling="2026-01-14 01:21:27.542922561 +0000 UTC m=+11.320489617" observedRunningTime="2026-01-14 01:21:28.45967497 +0000 UTC m=+12.237242025" watchObservedRunningTime="2026-01-14 01:21:29.289249922 +0000 UTC m=+13.066816977" Jan 14 01:21:33.900615 sudo[1825]: pam_unix(sudo:session): session closed for user root Jan 14 01:21:33.899000 audit[1825]: USER_END pid=1825 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:21:33.903553 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 01:21:33.903606 kernel: audit: type=1106 audit(1768353693.899:514): pid=1825 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:21:33.926756 kernel: audit: type=1104 audit(1768353693.899:515): pid=1825 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:21:33.899000 audit[1825]: CRED_DISP pid=1825 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:21:33.926895 sshd[1824]: Connection closed by 10.0.0.1 port 52694 Jan 14 01:21:33.919949 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Jan 14 01:21:33.921000 audit[1820]: USER_END pid=1820 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:21:33.930827 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:52694.service: Deactivated successfully. Jan 14 01:21:33.933425 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 01:21:33.938065 systemd[1]: session-8.scope: Consumed 5.654s CPU time, 214.5M memory peak. Jan 14 01:21:33.943819 kernel: audit: type=1106 audit(1768353693.921:516): pid=1820 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:21:33.945021 kernel: audit: type=1104 audit(1768353693.921:517): pid=1820 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:21:33.921000 audit[1820]: CRED_DISP pid=1820 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:21:33.946631 systemd-logind[1587]: Session 8 logged out. Waiting for processes to exit. Jan 14 01:21:33.950662 systemd-logind[1587]: Removed session 8. 
Jan 14 01:21:33.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.134:22-10.0.0.1:52694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:33.970911 kernel: audit: type=1131 audit(1768353693.929:518): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.134:22-10.0.0.1:52694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:21:34.437000 audit[3223]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:34.451674 kernel: audit: type=1325 audit(1768353694.437:519): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:34.437000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff314db170 a2=0 a3=7fff314db15c items=0 ppid=2947 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:34.474722 kernel: audit: type=1300 audit(1768353694.437:519): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff314db170 a2=0 a3=7fff314db15c items=0 ppid=2947 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:34.474817 kernel: audit: type=1327 audit(1768353694.437:519): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:34.437000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:34.454000 audit[3223]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:34.490702 kernel: audit: type=1325 audit(1768353694.454:520): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:34.454000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff314db170 a2=0 a3=0 items=0 ppid=2947 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:34.505683 kernel: audit: type=1300 audit(1768353694.454:520): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff314db170 a2=0 a3=0 items=0 ppid=2947 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:34.454000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:34.500000 audit[3225]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:34.500000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdcacf9e20 a2=0 a3=7ffdcacf9e0c items=0 ppid=2947 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:34.500000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:34.517000 audit[3225]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:34.517000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdcacf9e20 a2=0 a3=0 items=0 ppid=2947 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:34.517000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:37.032000 audit[3228]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:37.032000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffda6035bb0 a2=0 a3=7ffda6035b9c items=0 ppid=2947 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:37.032000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:37.039000 audit[3228]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:37.039000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffda6035bb0 a2=0 a3=0 items=0 ppid=2947 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:21:37.039000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:37.071000 audit[3230]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:37.071000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd74b13500 a2=0 a3=7ffd74b134ec items=0 ppid=2947 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:37.071000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:37.079000 audit[3230]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:37.079000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd74b13500 a2=0 a3=0 items=0 ppid=2947 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:37.079000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:38.763000 audit[3232]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:38.763000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc82e96a90 a2=0 a3=7ffc82e96a7c items=0 ppid=2947 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:38.763000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:38.771000 audit[3232]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:38.771000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc82e96a90 a2=0 a3=0 items=0 ppid=2947 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:38.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:38.812803 systemd[1]: Created slice kubepods-besteffort-podacf4fe82_1437_4438_8ffe_466bcecc3771.slice - libcontainer container kubepods-besteffort-podacf4fe82_1437_4438_8ffe_466bcecc3771.slice. 
Jan 14 01:21:38.831554 kubelet[2787]: I0114 01:21:38.831423 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acf4fe82-1437-4438-8ffe-466bcecc3771-tigera-ca-bundle\") pod \"calico-typha-6899d499b8-c9qdb\" (UID: \"acf4fe82-1437-4438-8ffe-466bcecc3771\") " pod="calico-system/calico-typha-6899d499b8-c9qdb" Jan 14 01:21:38.832813 kubelet[2787]: I0114 01:21:38.832776 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqdx8\" (UniqueName: \"kubernetes.io/projected/acf4fe82-1437-4438-8ffe-466bcecc3771-kube-api-access-sqdx8\") pod \"calico-typha-6899d499b8-c9qdb\" (UID: \"acf4fe82-1437-4438-8ffe-466bcecc3771\") " pod="calico-system/calico-typha-6899d499b8-c9qdb" Jan 14 01:21:38.833685 kubelet[2787]: I0114 01:21:38.832893 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/acf4fe82-1437-4438-8ffe-466bcecc3771-typha-certs\") pod \"calico-typha-6899d499b8-c9qdb\" (UID: \"acf4fe82-1437-4438-8ffe-466bcecc3771\") " pod="calico-system/calico-typha-6899d499b8-c9qdb" Jan 14 01:21:38.930927 systemd[1]: Created slice kubepods-besteffort-pod7bfdd97b_81a8_4c49_8938_8a4c7d1b42e5.slice - libcontainer container kubepods-besteffort-pod7bfdd97b_81a8_4c49_8938_8a4c7d1b42e5.slice. 
Jan 14 01:21:38.934185 kubelet[2787]: I0114 01:21:38.934158 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5-cni-net-dir\") pod \"calico-node-8swx8\" (UID: \"7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5\") " pod="calico-system/calico-node-8swx8" Jan 14 01:21:38.934479 kubelet[2787]: I0114 01:21:38.934462 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5-node-certs\") pod \"calico-node-8swx8\" (UID: \"7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5\") " pod="calico-system/calico-node-8swx8" Jan 14 01:21:38.934651 kubelet[2787]: I0114 01:21:38.934637 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5-cni-log-dir\") pod \"calico-node-8swx8\" (UID: \"7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5\") " pod="calico-system/calico-node-8swx8" Jan 14 01:21:38.934947 kubelet[2787]: I0114 01:21:38.934851 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5-var-run-calico\") pod \"calico-node-8swx8\" (UID: \"7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5\") " pod="calico-system/calico-node-8swx8" Jan 14 01:21:38.935221 kubelet[2787]: I0114 01:21:38.935204 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5-xtables-lock\") pod \"calico-node-8swx8\" (UID: \"7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5\") " pod="calico-system/calico-node-8swx8" Jan 14 01:21:38.935298 kubelet[2787]: I0114 01:21:38.935286 2787 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5-cni-bin-dir\") pod \"calico-node-8swx8\" (UID: \"7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5\") " pod="calico-system/calico-node-8swx8" Jan 14 01:21:38.935352 kubelet[2787]: I0114 01:21:38.935339 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5-tigera-ca-bundle\") pod \"calico-node-8swx8\" (UID: \"7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5\") " pod="calico-system/calico-node-8swx8" Jan 14 01:21:38.936221 kubelet[2787]: I0114 01:21:38.935606 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5-var-lib-calico\") pod \"calico-node-8swx8\" (UID: \"7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5\") " pod="calico-system/calico-node-8swx8" Jan 14 01:21:38.936221 kubelet[2787]: I0114 01:21:38.935764 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5-policysync\") pod \"calico-node-8swx8\" (UID: \"7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5\") " pod="calico-system/calico-node-8swx8" Jan 14 01:21:38.936221 kubelet[2787]: I0114 01:21:38.935778 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5-flexvol-driver-host\") pod \"calico-node-8swx8\" (UID: \"7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5\") " pod="calico-system/calico-node-8swx8" Jan 14 01:21:38.936221 kubelet[2787]: I0114 01:21:38.935799 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5-lib-modules\") pod \"calico-node-8swx8\" (UID: \"7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5\") " pod="calico-system/calico-node-8swx8" Jan 14 01:21:38.936221 kubelet[2787]: I0114 01:21:38.935814 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnw4b\" (UniqueName: \"kubernetes.io/projected/7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5-kube-api-access-nnw4b\") pod \"calico-node-8swx8\" (UID: \"7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5\") " pod="calico-system/calico-node-8swx8" Jan 14 01:21:39.038854 kubelet[2787]: E0114 01:21:39.038608 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.038854 kubelet[2787]: W0114 01:21:39.038674 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.039961 kubelet[2787]: E0114 01:21:39.039886 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.040483 kubelet[2787]: E0114 01:21:39.040364 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.040483 kubelet[2787]: W0114 01:21:39.040379 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.040483 kubelet[2787]: E0114 01:21:39.040399 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.041247 kubelet[2787]: E0114 01:21:39.041158 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.041247 kubelet[2787]: W0114 01:21:39.041176 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.041247 kubelet[2787]: E0114 01:21:39.041192 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.043068 kubelet[2787]: E0114 01:21:39.043008 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.043068 kubelet[2787]: W0114 01:21:39.043056 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.043181 kubelet[2787]: E0114 01:21:39.043071 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.044350 kubelet[2787]: E0114 01:21:39.044212 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.044350 kubelet[2787]: W0114 01:21:39.044270 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.044350 kubelet[2787]: E0114 01:21:39.044288 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.050765 kubelet[2787]: E0114 01:21:39.050643 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.050765 kubelet[2787]: W0114 01:21:39.050703 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.050765 kubelet[2787]: E0114 01:21:39.050728 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.052400 kubelet[2787]: E0114 01:21:39.052377 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.052400 kubelet[2787]: W0114 01:21:39.052394 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.052458 kubelet[2787]: E0114 01:21:39.052409 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.120310 kubelet[2787]: E0114 01:21:39.120101 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqxnp" podUID="ba3d93c2-390e-4ba5-bb19-4864194c73f7" Jan 14 01:21:39.129355 kubelet[2787]: E0114 01:21:39.129078 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:39.130102 containerd[1611]: time="2026-01-14T01:21:39.130073929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6899d499b8-c9qdb,Uid:acf4fe82-1437-4438-8ffe-466bcecc3771,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:39.172629 containerd[1611]: time="2026-01-14T01:21:39.171710012Z" level=info msg="connecting to shim feda42230b6ed958f6c1c9c38935a1ee01f8d000f4ae433516f6689e6b1e2ab6" address="unix:///run/containerd/s/66a5fa5f6394addddaf235618ceb03c696f15e19fa13273e99a7c35eb045c9f9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:39.221932 kubelet[2787]: E0114 01:21:39.221269 2787 driver-call.go:262] Failed to 
unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.221932 kubelet[2787]: W0114 01:21:39.221353 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.221932 kubelet[2787]: E0114 01:21:39.221384 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.225640 kubelet[2787]: E0114 01:21:39.225585 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.225640 kubelet[2787]: W0114 01:21:39.225609 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.225640 kubelet[2787]: E0114 01:21:39.225633 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.227768 kubelet[2787]: E0114 01:21:39.227656 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.227768 kubelet[2787]: W0114 01:21:39.227726 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.227768 kubelet[2787]: E0114 01:21:39.227748 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.230653 kubelet[2787]: E0114 01:21:39.230487 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.231953 kubelet[2787]: W0114 01:21:39.230824 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.231953 kubelet[2787]: E0114 01:21:39.231853 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.234399 kubelet[2787]: E0114 01:21:39.234361 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.234399 kubelet[2787]: W0114 01:21:39.234379 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.234399 kubelet[2787]: E0114 01:21:39.234398 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.236560 kubelet[2787]: E0114 01:21:39.236454 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.236617 kubelet[2787]: W0114 01:21:39.236496 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.236617 kubelet[2787]: E0114 01:21:39.236609 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.237221 kubelet[2787]: E0114 01:21:39.237184 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.237221 kubelet[2787]: W0114 01:21:39.237199 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.237221 kubelet[2787]: E0114 01:21:39.237214 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.238105 kubelet[2787]: E0114 01:21:39.238035 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.238105 kubelet[2787]: W0114 01:21:39.238098 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.238221 kubelet[2787]: E0114 01:21:39.238116 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.238708 kubelet[2787]: E0114 01:21:39.238489 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:39.240448 kubelet[2787]: E0114 01:21:39.240092 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.240448 kubelet[2787]: W0114 01:21:39.240186 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.240448 kubelet[2787]: E0114 01:21:39.240204 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.241608 kubelet[2787]: E0114 01:21:39.240781 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.241608 kubelet[2787]: W0114 01:21:39.240838 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.241608 kubelet[2787]: E0114 01:21:39.240852 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.241608 kubelet[2787]: E0114 01:21:39.241306 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.241608 kubelet[2787]: W0114 01:21:39.241318 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.241608 kubelet[2787]: E0114 01:21:39.241330 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.241904 containerd[1611]: time="2026-01-14T01:21:39.241861118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8swx8,Uid:7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:39.242351 kubelet[2787]: E0114 01:21:39.242315 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.242351 kubelet[2787]: W0114 01:21:39.242330 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.242351 kubelet[2787]: E0114 01:21:39.242343 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.242982 kubelet[2787]: E0114 01:21:39.242817 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.242982 kubelet[2787]: W0114 01:21:39.242872 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.242982 kubelet[2787]: E0114 01:21:39.242886 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.245907 kubelet[2787]: E0114 01:21:39.245749 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.245907 kubelet[2787]: W0114 01:21:39.245811 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.245907 kubelet[2787]: E0114 01:21:39.245826 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.246377 kubelet[2787]: E0114 01:21:39.246342 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.246377 kubelet[2787]: W0114 01:21:39.246358 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.246377 kubelet[2787]: E0114 01:21:39.246369 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.246939 kubelet[2787]: E0114 01:21:39.246917 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.246939 kubelet[2787]: W0114 01:21:39.246928 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.246984 kubelet[2787]: E0114 01:21:39.246940 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.247425 kubelet[2787]: E0114 01:21:39.247373 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.247425 kubelet[2787]: W0114 01:21:39.247387 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.247425 kubelet[2787]: E0114 01:21:39.247399 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.248355 kubelet[2787]: E0114 01:21:39.248342 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.249224 kubelet[2787]: W0114 01:21:39.248482 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.249249 systemd[1]: Started cri-containerd-feda42230b6ed958f6c1c9c38935a1ee01f8d000f4ae433516f6689e6b1e2ab6.scope - libcontainer container feda42230b6ed958f6c1c9c38935a1ee01f8d000f4ae433516f6689e6b1e2ab6. Jan 14 01:21:39.249608 kubelet[2787]: E0114 01:21:39.248497 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.250478 kubelet[2787]: E0114 01:21:39.250465 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.250632 kubelet[2787]: W0114 01:21:39.250619 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.250801 kubelet[2787]: E0114 01:21:39.250753 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.252483 kubelet[2787]: E0114 01:21:39.252359 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.252655 kubelet[2787]: W0114 01:21:39.252641 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.252866 kubelet[2787]: E0114 01:21:39.252710 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.254332 kubelet[2787]: E0114 01:21:39.254044 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.254332 kubelet[2787]: W0114 01:21:39.254057 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.254332 kubelet[2787]: E0114 01:21:39.254066 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.254332 kubelet[2787]: I0114 01:21:39.254181 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba3d93c2-390e-4ba5-bb19-4864194c73f7-kubelet-dir\") pod \"csi-node-driver-qqxnp\" (UID: \"ba3d93c2-390e-4ba5-bb19-4864194c73f7\") " pod="calico-system/csi-node-driver-qqxnp" Jan 14 01:21:39.254671 kubelet[2787]: E0114 01:21:39.254657 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.254724 kubelet[2787]: W0114 01:21:39.254713 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.254766 kubelet[2787]: E0114 01:21:39.254757 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.254815 kubelet[2787]: I0114 01:21:39.254804 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ba3d93c2-390e-4ba5-bb19-4864194c73f7-varrun\") pod \"csi-node-driver-qqxnp\" (UID: \"ba3d93c2-390e-4ba5-bb19-4864194c73f7\") " pod="calico-system/csi-node-driver-qqxnp" Jan 14 01:21:39.255441 kubelet[2787]: E0114 01:21:39.255423 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.255733 kubelet[2787]: W0114 01:21:39.255713 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.255926 kubelet[2787]: E0114 01:21:39.255904 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.256194 kubelet[2787]: I0114 01:21:39.256032 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9ggb\" (UniqueName: \"kubernetes.io/projected/ba3d93c2-390e-4ba5-bb19-4864194c73f7-kube-api-access-d9ggb\") pod \"csi-node-driver-qqxnp\" (UID: \"ba3d93c2-390e-4ba5-bb19-4864194c73f7\") " pod="calico-system/csi-node-driver-qqxnp" Jan 14 01:21:39.256808 kubelet[2787]: E0114 01:21:39.256795 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.257041 kubelet[2787]: W0114 01:21:39.256950 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.257100 kubelet[2787]: E0114 01:21:39.257089 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.257339 kubelet[2787]: I0114 01:21:39.257325 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ba3d93c2-390e-4ba5-bb19-4864194c73f7-socket-dir\") pod \"csi-node-driver-qqxnp\" (UID: \"ba3d93c2-390e-4ba5-bb19-4864194c73f7\") " pod="calico-system/csi-node-driver-qqxnp" Jan 14 01:21:39.257965 kubelet[2787]: E0114 01:21:39.257812 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.257965 kubelet[2787]: W0114 01:21:39.257822 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.257965 kubelet[2787]: E0114 01:21:39.257832 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.259449 kubelet[2787]: E0114 01:21:39.259011 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.259995 kubelet[2787]: W0114 01:21:39.259874 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.259995 kubelet[2787]: E0114 01:21:39.259891 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.260811 kubelet[2787]: E0114 01:21:39.260762 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.260811 kubelet[2787]: W0114 01:21:39.260777 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.260811 kubelet[2787]: E0114 01:21:39.260791 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.261644 kubelet[2787]: E0114 01:21:39.261464 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.261644 kubelet[2787]: W0114 01:21:39.261480 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.261644 kubelet[2787]: E0114 01:21:39.261494 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.263646 kubelet[2787]: E0114 01:21:39.263610 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.263646 kubelet[2787]: W0114 01:21:39.263623 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.263646 kubelet[2787]: E0114 01:21:39.263632 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.263987 kubelet[2787]: I0114 01:21:39.263969 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ba3d93c2-390e-4ba5-bb19-4864194c73f7-registration-dir\") pod \"csi-node-driver-qqxnp\" (UID: \"ba3d93c2-390e-4ba5-bb19-4864194c73f7\") " pod="calico-system/csi-node-driver-qqxnp" Jan 14 01:21:39.264979 kubelet[2787]: E0114 01:21:39.264930 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.264979 kubelet[2787]: W0114 01:21:39.264946 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.264979 kubelet[2787]: E0114 01:21:39.264959 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.265704 kubelet[2787]: E0114 01:21:39.265413 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.265704 kubelet[2787]: W0114 01:21:39.265424 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.265704 kubelet[2787]: E0114 01:21:39.265432 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.266213 kubelet[2787]: E0114 01:21:39.266201 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.266265 kubelet[2787]: W0114 01:21:39.266256 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.266307 kubelet[2787]: E0114 01:21:39.266298 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.266697 kubelet[2787]: E0114 01:21:39.266686 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.266750 kubelet[2787]: W0114 01:21:39.266740 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.266790 kubelet[2787]: E0114 01:21:39.266782 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.267761 kubelet[2787]: E0114 01:21:39.267747 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.267817 kubelet[2787]: W0114 01:21:39.267807 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.267876 kubelet[2787]: E0114 01:21:39.267860 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.268371 kubelet[2787]: E0114 01:21:39.268338 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.268371 kubelet[2787]: W0114 01:21:39.268348 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.268371 kubelet[2787]: E0114 01:21:39.268357 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.286637 containerd[1611]: time="2026-01-14T01:21:39.286408413Z" level=info msg="connecting to shim 7005a825cc642e05696c3f5c3c2eb3128eb14408e5b2f2c24f62131c542a8ad9" address="unix:///run/containerd/s/698d34ad6847d97851061808812649728c49fb12f03ac9d3d0e0acc1546ea0a8" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:39.287000 audit: BPF prog-id=151 op=LOAD Jan 14 01:21:39.294887 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 14 01:21:39.294944 kernel: audit: type=1334 audit(1768353699.287:529): prog-id=151 op=LOAD Jan 14 01:21:39.287000 audit: BPF prog-id=152 op=LOAD Jan 14 01:21:39.302900 kernel: audit: type=1334 audit(1768353699.287:530): prog-id=152 op=LOAD Jan 14 01:21:39.287000 audit[3274]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3263 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.316270 kernel: audit: type=1300 audit(1768353699.287:530): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3263 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665646134323233306236656439353866366331633963333839333561 Jan 14 01:21:39.330572 kernel: audit: type=1327 audit(1768353699.287:530): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665646134323233306236656439353866366331633963333839333561 Jan 14 01:21:39.330631 kernel: audit: type=1334 audit(1768353699.287:531): prog-id=152 op=UNLOAD Jan 14 01:21:39.287000 audit: BPF prog-id=152 op=UNLOAD Jan 14 01:21:39.287000 audit[3274]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.346292 kernel: audit: type=1300 audit(1768353699.287:531): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665646134323233306236656439353866366331633963333839333561 Jan 14 01:21:39.287000 audit: BPF prog-id=153 op=LOAD Jan 14 01:21:39.365285 kernel: audit: type=1327 audit(1768353699.287:531): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665646134323233306236656439353866366331633963333839333561 Jan 14 01:21:39.365394 kernel: audit: type=1334 audit(1768353699.287:532): prog-id=153 op=LOAD Jan 14 01:21:39.365414 kernel: audit: type=1300 audit(1768353699.287:532): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3263 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.287000 audit[3274]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3263 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.372002 kubelet[2787]: E0114 01:21:39.371909 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.372002 kubelet[2787]: W0114 01:21:39.371972 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.372221 kubelet[2787]: E0114 01:21:39.372090 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.375139 kubelet[2787]: E0114 01:21:39.374987 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.375363 kubelet[2787]: W0114 01:21:39.375263 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.375426 kubelet[2787]: E0114 01:21:39.375415 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.376966 kubelet[2787]: E0114 01:21:39.376855 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.376966 kubelet[2787]: W0114 01:21:39.376867 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.376966 kubelet[2787]: E0114 01:21:39.376880 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.379972 kernel: audit: type=1327 audit(1768353699.287:532): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665646134323233306236656439353866366331633963333839333561 Jan 14 01:21:39.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665646134323233306236656439353866366331633963333839333561 Jan 14 01:21:39.380324 kubelet[2787]: E0114 01:21:39.379228 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.380324 kubelet[2787]: W0114 01:21:39.379238 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.380324 kubelet[2787]: E0114 01:21:39.379248 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.383825 kubelet[2787]: E0114 01:21:39.383354 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.383825 kubelet[2787]: W0114 01:21:39.383750 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.384084 kubelet[2787]: E0114 01:21:39.384016 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.386851 kubelet[2787]: E0114 01:21:39.386768 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.386851 kubelet[2787]: W0114 01:21:39.386831 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.386851 kubelet[2787]: E0114 01:21:39.386848 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.387455 kubelet[2787]: E0114 01:21:39.387377 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.387455 kubelet[2787]: W0114 01:21:39.387436 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.387569 kubelet[2787]: E0114 01:21:39.387454 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.388646 kubelet[2787]: E0114 01:21:39.387914 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.388646 kubelet[2787]: W0114 01:21:39.387928 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.388646 kubelet[2787]: E0114 01:21:39.387941 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.389830 kubelet[2787]: E0114 01:21:39.389755 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.389830 kubelet[2787]: W0114 01:21:39.389809 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.389830 kubelet[2787]: E0114 01:21:39.389825 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.390289 kubelet[2787]: E0114 01:21:39.390145 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.390289 kubelet[2787]: W0114 01:21:39.390255 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.390289 kubelet[2787]: E0114 01:21:39.390269 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.391192 kubelet[2787]: E0114 01:21:39.390997 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.391192 kubelet[2787]: W0114 01:21:39.391049 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.391192 kubelet[2787]: E0114 01:21:39.391062 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.287000 audit: BPF prog-id=154 op=LOAD Jan 14 01:21:39.287000 audit[3274]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3263 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665646134323233306236656439353866366331633963333839333561 Jan 14 01:21:39.287000 audit: BPF prog-id=154 op=UNLOAD Jan 14 01:21:39.287000 audit[3274]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.287000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665646134323233306236656439353866366331633963333839333561 Jan 14 01:21:39.287000 audit: BPF prog-id=153 op=UNLOAD Jan 14 01:21:39.287000 audit[3274]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665646134323233306236656439353866366331633963333839333561 Jan 14 01:21:39.287000 audit: BPF prog-id=155 op=LOAD Jan 14 01:21:39.287000 audit[3274]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3263 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665646134323233306236656439353866366331633963333839333561 Jan 14 01:21:39.392498 kubelet[2787]: E0114 01:21:39.391984 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.392498 kubelet[2787]: W0114 01:21:39.391997 2787 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.392498 kubelet[2787]: E0114 01:21:39.392011 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.392498 kubelet[2787]: E0114 01:21:39.392403 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.392498 kubelet[2787]: W0114 01:21:39.392413 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.392498 kubelet[2787]: E0114 01:21:39.392425 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.393099 kubelet[2787]: E0114 01:21:39.392952 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.393099 kubelet[2787]: W0114 01:21:39.392968 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.393099 kubelet[2787]: E0114 01:21:39.392980 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.393763 kubelet[2787]: E0114 01:21:39.393447 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.394694 kubelet[2787]: W0114 01:21:39.394615 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.394694 kubelet[2787]: E0114 01:21:39.394677 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.395297 kubelet[2787]: E0114 01:21:39.395221 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.395297 kubelet[2787]: W0114 01:21:39.395278 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.395297 kubelet[2787]: E0114 01:21:39.395292 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.395954 kubelet[2787]: E0114 01:21:39.395898 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.395993 kubelet[2787]: W0114 01:21:39.395955 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.395993 kubelet[2787]: E0114 01:21:39.395970 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.396709 kubelet[2787]: E0114 01:21:39.396636 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.396709 kubelet[2787]: W0114 01:21:39.396692 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.396709 kubelet[2787]: E0114 01:21:39.396707 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.397273 kubelet[2787]: E0114 01:21:39.397222 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.397273 kubelet[2787]: W0114 01:21:39.397241 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.397273 kubelet[2787]: E0114 01:21:39.397256 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.397726 systemd[1]: Started cri-containerd-7005a825cc642e05696c3f5c3c2eb3128eb14408e5b2f2c24f62131c542a8ad9.scope - libcontainer container 7005a825cc642e05696c3f5c3c2eb3128eb14408e5b2f2c24f62131c542a8ad9. Jan 14 01:21:39.398675 kubelet[2787]: E0114 01:21:39.398624 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.398675 kubelet[2787]: W0114 01:21:39.398643 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.398675 kubelet[2787]: E0114 01:21:39.398656 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.401074 kubelet[2787]: E0114 01:21:39.400958 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.401074 kubelet[2787]: W0114 01:21:39.400973 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.401074 kubelet[2787]: E0114 01:21:39.400984 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.401432 kubelet[2787]: E0114 01:21:39.401401 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.401432 kubelet[2787]: W0114 01:21:39.401412 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.401432 kubelet[2787]: E0114 01:21:39.401421 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.403492 kubelet[2787]: E0114 01:21:39.403364 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.403647 kubelet[2787]: W0114 01:21:39.403630 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.404382 kubelet[2787]: E0114 01:21:39.403721 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.406739 kubelet[2787]: E0114 01:21:39.406611 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.406927 kubelet[2787]: W0114 01:21:39.406854 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.406988 kubelet[2787]: E0114 01:21:39.406976 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:39.408028 kubelet[2787]: E0114 01:21:39.408013 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.408028 kubelet[2787]: W0114 01:21:39.408026 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.408119 kubelet[2787]: E0114 01:21:39.408037 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.408211 containerd[1611]: time="2026-01-14T01:21:39.408086319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6899d499b8-c9qdb,Uid:acf4fe82-1437-4438-8ffe-466bcecc3771,Namespace:calico-system,Attempt:0,} returns sandbox id \"feda42230b6ed958f6c1c9c38935a1ee01f8d000f4ae433516f6689e6b1e2ab6\"" Jan 14 01:21:39.412146 kubelet[2787]: E0114 01:21:39.412051 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:39.416726 containerd[1611]: time="2026-01-14T01:21:39.416439995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 01:21:39.420065 kubelet[2787]: E0114 01:21:39.419985 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:39.420764 kubelet[2787]: W0114 01:21:39.420686 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:39.420764 kubelet[2787]: E0114 01:21:39.420753 2787 plugins.go:703] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:39.436000 audit: BPF prog-id=156 op=LOAD Jan 14 01:21:39.437000 audit: BPF prog-id=157 op=LOAD Jan 14 01:21:39.437000 audit[3348]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3336 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730303561383235636336343265303536393663336635633363326562 Jan 14 01:21:39.438000 audit: BPF prog-id=157 op=UNLOAD Jan 14 01:21:39.438000 audit[3348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730303561383235636336343265303536393663336635633363326562 Jan 14 01:21:39.438000 audit: BPF prog-id=158 op=LOAD Jan 14 01:21:39.438000 audit[3348]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3336 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.438000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730303561383235636336343265303536393663336635633363326562 Jan 14 01:21:39.438000 audit: BPF prog-id=159 op=LOAD Jan 14 01:21:39.438000 audit[3348]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3336 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730303561383235636336343265303536393663336635633363326562 Jan 14 01:21:39.438000 audit: BPF prog-id=159 op=UNLOAD Jan 14 01:21:39.438000 audit[3348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730303561383235636336343265303536393663336635633363326562 Jan 14 01:21:39.438000 audit: BPF prog-id=158 op=UNLOAD Jan 14 01:21:39.438000 audit[3348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:21:39.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730303561383235636336343265303536393663336635633363326562 Jan 14 01:21:39.438000 audit: BPF prog-id=160 op=LOAD Jan 14 01:21:39.438000 audit[3348]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3336 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730303561383235636336343265303536393663336635633363326562 Jan 14 01:21:39.487049 containerd[1611]: time="2026-01-14T01:21:39.487005746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8swx8,Uid:7bfdd97b-81a8-4c49-8938-8a4c7d1b42e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"7005a825cc642e05696c3f5c3c2eb3128eb14408e5b2f2c24f62131c542a8ad9\"" Jan 14 01:21:39.488792 kubelet[2787]: E0114 01:21:39.488678 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:39.789000 audit[3408]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:39.789000 audit[3408]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe4870cdc0 a2=0 a3=7ffe4870cdac items=0 ppid=2947 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.789000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:39.794000 audit[3408]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:39.794000 audit[3408]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4870cdc0 a2=0 a3=0 items=0 ppid=2947 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:39.794000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:40.392742 kubelet[2787]: E0114 01:21:40.392644 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqxnp" podUID="ba3d93c2-390e-4ba5-bb19-4864194c73f7" Jan 14 01:21:42.148154 containerd[1611]: time="2026-01-14T01:21:42.148026824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:42.149753 containerd[1611]: time="2026-01-14T01:21:42.149624535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 14 01:21:42.150882 containerd[1611]: time="2026-01-14T01:21:42.150834536Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:42.155094 containerd[1611]: 
time="2026-01-14T01:21:42.154995632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:42.155834 containerd[1611]: time="2026-01-14T01:21:42.155372836Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.738654987s" Jan 14 01:21:42.155834 containerd[1611]: time="2026-01-14T01:21:42.155405112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 01:21:42.156471 containerd[1611]: time="2026-01-14T01:21:42.156327629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 01:21:42.178734 containerd[1611]: time="2026-01-14T01:21:42.178683522Z" level=info msg="CreateContainer within sandbox \"feda42230b6ed958f6c1c9c38935a1ee01f8d000f4ae433516f6689e6b1e2ab6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 01:21:42.197177 containerd[1611]: time="2026-01-14T01:21:42.195980081Z" level=info msg="Container 169cef9e20bea2822f0200e44126ba59c832d3542610dbe0d38e9cba324a15f5: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:42.208063 containerd[1611]: time="2026-01-14T01:21:42.207907534Z" level=info msg="CreateContainer within sandbox \"feda42230b6ed958f6c1c9c38935a1ee01f8d000f4ae433516f6689e6b1e2ab6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"169cef9e20bea2822f0200e44126ba59c832d3542610dbe0d38e9cba324a15f5\"" Jan 14 01:21:42.208925 containerd[1611]: time="2026-01-14T01:21:42.208671562Z" level=info 
msg="StartContainer for \"169cef9e20bea2822f0200e44126ba59c832d3542610dbe0d38e9cba324a15f5\"" Jan 14 01:21:42.211307 containerd[1611]: time="2026-01-14T01:21:42.211094260Z" level=info msg="connecting to shim 169cef9e20bea2822f0200e44126ba59c832d3542610dbe0d38e9cba324a15f5" address="unix:///run/containerd/s/66a5fa5f6394addddaf235618ceb03c696f15e19fa13273e99a7c35eb045c9f9" protocol=ttrpc version=3 Jan 14 01:21:42.253270 systemd[1]: Started cri-containerd-169cef9e20bea2822f0200e44126ba59c832d3542610dbe0d38e9cba324a15f5.scope - libcontainer container 169cef9e20bea2822f0200e44126ba59c832d3542610dbe0d38e9cba324a15f5. Jan 14 01:21:42.280000 audit: BPF prog-id=161 op=LOAD Jan 14 01:21:42.281000 audit: BPF prog-id=162 op=LOAD Jan 14 01:21:42.281000 audit[3419]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3263 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:42.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136396365663965323062656132383232663032303065343431323662 Jan 14 01:21:42.281000 audit: BPF prog-id=162 op=UNLOAD Jan 14 01:21:42.281000 audit[3419]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:42.281000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136396365663965323062656132383232663032303065343431323662 Jan 14 01:21:42.282000 audit: BPF prog-id=163 op=LOAD Jan 14 01:21:42.282000 audit[3419]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3263 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:42.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136396365663965323062656132383232663032303065343431323662 Jan 14 01:21:42.282000 audit: BPF prog-id=164 op=LOAD Jan 14 01:21:42.282000 audit[3419]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3263 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:42.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136396365663965323062656132383232663032303065343431323662 Jan 14 01:21:42.282000 audit: BPF prog-id=164 op=UNLOAD Jan 14 01:21:42.282000 audit[3419]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:21:42.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136396365663965323062656132383232663032303065343431323662 Jan 14 01:21:42.283000 audit: BPF prog-id=163 op=UNLOAD Jan 14 01:21:42.283000 audit[3419]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:42.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136396365663965323062656132383232663032303065343431323662 Jan 14 01:21:42.283000 audit: BPF prog-id=165 op=LOAD Jan 14 01:21:42.283000 audit[3419]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3263 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:42.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136396365663965323062656132383232663032303065343431323662 Jan 14 01:21:42.338204 containerd[1611]: time="2026-01-14T01:21:42.338161855Z" level=info msg="StartContainer for \"169cef9e20bea2822f0200e44126ba59c832d3542610dbe0d38e9cba324a15f5\" returns successfully" Jan 14 01:21:42.393249 kubelet[2787]: E0114 01:21:42.393124 2787 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqxnp" podUID="ba3d93c2-390e-4ba5-bb19-4864194c73f7" Jan 14 01:21:42.515790 kubelet[2787]: E0114 01:21:42.514993 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:42.532310 kubelet[2787]: I0114 01:21:42.532188 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6899d499b8-c9qdb" podStartSLOduration=1.7913241279999998 podStartE2EDuration="4.532173064s" podCreationTimestamp="2026-01-14 01:21:38 +0000 UTC" firstStartedPulling="2026-01-14 01:21:39.415401968 +0000 UTC m=+23.192969023" lastFinishedPulling="2026-01-14 01:21:42.156250903 +0000 UTC m=+25.933817959" observedRunningTime="2026-01-14 01:21:42.532120784 +0000 UTC m=+26.309687839" watchObservedRunningTime="2026-01-14 01:21:42.532173064 +0000 UTC m=+26.309740119" Jan 14 01:21:42.582775 kubelet[2787]: E0114 01:21:42.582693 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.582775 kubelet[2787]: W0114 01:21:42.582770 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.583389 kubelet[2787]: E0114 01:21:42.582798 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.584007 kubelet[2787]: E0114 01:21:42.583772 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.584007 kubelet[2787]: W0114 01:21:42.583797 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.584007 kubelet[2787]: E0114 01:21:42.583907 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.585900 kubelet[2787]: E0114 01:21:42.585771 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.586015 kubelet[2787]: W0114 01:21:42.585916 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.586015 kubelet[2787]: E0114 01:21:42.585931 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.587072 kubelet[2787]: E0114 01:21:42.586894 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.587072 kubelet[2787]: W0114 01:21:42.586939 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.587072 kubelet[2787]: E0114 01:21:42.586951 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.588388 kubelet[2787]: E0114 01:21:42.587466 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.588388 kubelet[2787]: W0114 01:21:42.587475 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.588388 kubelet[2787]: E0114 01:21:42.587484 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.588388 kubelet[2787]: E0114 01:21:42.587949 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.588388 kubelet[2787]: W0114 01:21:42.587958 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.588388 kubelet[2787]: E0114 01:21:42.587968 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.588813 kubelet[2787]: E0114 01:21:42.588613 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.588813 kubelet[2787]: W0114 01:21:42.588624 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.588813 kubelet[2787]: E0114 01:21:42.588633 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.589207 kubelet[2787]: E0114 01:21:42.589144 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.589207 kubelet[2787]: W0114 01:21:42.589187 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.589207 kubelet[2787]: E0114 01:21:42.589197 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.590032 kubelet[2787]: E0114 01:21:42.589969 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.590165 kubelet[2787]: W0114 01:21:42.590114 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.590288 kubelet[2787]: E0114 01:21:42.590240 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.592382 kubelet[2787]: E0114 01:21:42.592335 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.592382 kubelet[2787]: W0114 01:21:42.592379 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.592454 kubelet[2787]: E0114 01:21:42.592398 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.592959 kubelet[2787]: E0114 01:21:42.592820 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.592959 kubelet[2787]: W0114 01:21:42.592889 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.592959 kubelet[2787]: E0114 01:21:42.592900 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.593374 kubelet[2787]: E0114 01:21:42.593335 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.593374 kubelet[2787]: W0114 01:21:42.593373 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.593496 kubelet[2787]: E0114 01:21:42.593383 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.593940 kubelet[2787]: E0114 01:21:42.593904 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.593940 kubelet[2787]: W0114 01:21:42.593917 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.593940 kubelet[2787]: E0114 01:21:42.593925 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.594713 kubelet[2787]: E0114 01:21:42.594600 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.594713 kubelet[2787]: W0114 01:21:42.594642 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.594713 kubelet[2787]: E0114 01:21:42.594689 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.595984 kubelet[2787]: E0114 01:21:42.595722 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.595984 kubelet[2787]: W0114 01:21:42.595780 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.595984 kubelet[2787]: E0114 01:21:42.595794 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.617246 kubelet[2787]: E0114 01:21:42.616757 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.617451 kubelet[2787]: W0114 01:21:42.617433 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.618093 kubelet[2787]: E0114 01:21:42.617608 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.619264 kubelet[2787]: E0114 01:21:42.618982 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.619264 kubelet[2787]: W0114 01:21:42.618997 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.619264 kubelet[2787]: E0114 01:21:42.619041 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.620449 kubelet[2787]: E0114 01:21:42.620268 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.620449 kubelet[2787]: W0114 01:21:42.620308 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.620449 kubelet[2787]: E0114 01:21:42.620320 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.621343 kubelet[2787]: E0114 01:21:42.621290 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.621343 kubelet[2787]: W0114 01:21:42.621337 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.621649 kubelet[2787]: E0114 01:21:42.621354 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.622908 kubelet[2787]: E0114 01:21:42.622585 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.622908 kubelet[2787]: W0114 01:21:42.622606 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.622908 kubelet[2787]: E0114 01:21:42.622623 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.624240 kubelet[2787]: E0114 01:21:42.624039 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.624240 kubelet[2787]: W0114 01:21:42.624060 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.624240 kubelet[2787]: E0114 01:21:42.624074 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.624827 kubelet[2787]: E0114 01:21:42.624766 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.624827 kubelet[2787]: W0114 01:21:42.624812 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.624900 kubelet[2787]: E0114 01:21:42.624850 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.625988 kubelet[2787]: E0114 01:21:42.625774 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.625988 kubelet[2787]: W0114 01:21:42.625815 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.625988 kubelet[2787]: E0114 01:21:42.625825 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.628558 kubelet[2787]: E0114 01:21:42.628427 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.628558 kubelet[2787]: W0114 01:21:42.628463 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.628558 kubelet[2787]: E0114 01:21:42.628474 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.629421 kubelet[2787]: E0114 01:21:42.629230 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.629421 kubelet[2787]: W0114 01:21:42.629290 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.629421 kubelet[2787]: E0114 01:21:42.629300 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.629998 kubelet[2787]: E0114 01:21:42.629780 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.629998 kubelet[2787]: W0114 01:21:42.629793 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.629998 kubelet[2787]: E0114 01:21:42.629803 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.630991 kubelet[2787]: E0114 01:21:42.630804 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.631790 kubelet[2787]: W0114 01:21:42.631700 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.631790 kubelet[2787]: E0114 01:21:42.631772 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.633036 kubelet[2787]: E0114 01:21:42.632886 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.633036 kubelet[2787]: W0114 01:21:42.632926 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.633036 kubelet[2787]: E0114 01:21:42.632936 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.634250 kubelet[2787]: E0114 01:21:42.633956 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.634250 kubelet[2787]: W0114 01:21:42.633995 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.634250 kubelet[2787]: E0114 01:21:42.634005 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.635433 kubelet[2787]: E0114 01:21:42.635386 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.635433 kubelet[2787]: W0114 01:21:42.635438 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.635433 kubelet[2787]: E0114 01:21:42.635453 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.636308 kubelet[2787]: E0114 01:21:42.636276 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.636308 kubelet[2787]: W0114 01:21:42.636295 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.636465 kubelet[2787]: E0114 01:21:42.636309 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.637478 kubelet[2787]: E0114 01:21:42.637444 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.637478 kubelet[2787]: W0114 01:21:42.637466 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.637606 kubelet[2787]: E0114 01:21:42.637480 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:42.638253 kubelet[2787]: E0114 01:21:42.638227 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:42.638253 kubelet[2787]: W0114 01:21:42.638242 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:42.638336 kubelet[2787]: E0114 01:21:42.638255 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:42.847193 containerd[1611]: time="2026-01-14T01:21:42.847072899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:42.847990 containerd[1611]: time="2026-01-14T01:21:42.847935225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=2517" Jan 14 01:21:42.849793 containerd[1611]: time="2026-01-14T01:21:42.849620207Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:42.852184 containerd[1611]: time="2026-01-14T01:21:42.852101401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:42.853288 containerd[1611]: time="2026-01-14T01:21:42.852397113Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 696.045217ms" Jan 14 01:21:42.853288 containerd[1611]: time="2026-01-14T01:21:42.852450552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 01:21:42.858201 containerd[1611]: time="2026-01-14T01:21:42.858169787Z" level=info msg="CreateContainer within sandbox \"7005a825cc642e05696c3f5c3c2eb3128eb14408e5b2f2c24f62131c542a8ad9\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 01:21:42.869906 containerd[1611]: time="2026-01-14T01:21:42.869674435Z" level=info msg="Container baa39f1b8c8af8e60e85aa83712c6c5e04282fbf23f7d8a8b8dbe540cb038620: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:42.881014 containerd[1611]: time="2026-01-14T01:21:42.880862098Z" level=info msg="CreateContainer within sandbox \"7005a825cc642e05696c3f5c3c2eb3128eb14408e5b2f2c24f62131c542a8ad9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"baa39f1b8c8af8e60e85aa83712c6c5e04282fbf23f7d8a8b8dbe540cb038620\"" Jan 14 01:21:42.881977 containerd[1611]: time="2026-01-14T01:21:42.881633662Z" level=info msg="StartContainer for \"baa39f1b8c8af8e60e85aa83712c6c5e04282fbf23f7d8a8b8dbe540cb038620\"" Jan 14 01:21:42.883294 containerd[1611]: time="2026-01-14T01:21:42.883194134Z" level=info msg="connecting to shim baa39f1b8c8af8e60e85aa83712c6c5e04282fbf23f7d8a8b8dbe540cb038620" address="unix:///run/containerd/s/698d34ad6847d97851061808812649728c49fb12f03ac9d3d0e0acc1546ea0a8" protocol=ttrpc version=3 Jan 14 01:21:42.920838 systemd[1]: Started cri-containerd-baa39f1b8c8af8e60e85aa83712c6c5e04282fbf23f7d8a8b8dbe540cb038620.scope - libcontainer container baa39f1b8c8af8e60e85aa83712c6c5e04282fbf23f7d8a8b8dbe540cb038620. 
Jan 14 01:21:43.005000 audit: BPF prog-id=166 op=LOAD Jan 14 01:21:43.005000 audit[3493]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3336 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:43.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261613339663162386338616638653630653835616138333731326336 Jan 14 01:21:43.005000 audit: BPF prog-id=167 op=LOAD Jan 14 01:21:43.005000 audit[3493]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3336 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:43.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261613339663162386338616638653630653835616138333731326336 Jan 14 01:21:43.005000 audit: BPF prog-id=167 op=UNLOAD Jan 14 01:21:43.005000 audit[3493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:43.005000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261613339663162386338616638653630653835616138333731326336 Jan 14 01:21:43.005000 audit: BPF prog-id=166 op=UNLOAD Jan 14 01:21:43.005000 audit[3493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:43.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261613339663162386338616638653630653835616138333731326336 Jan 14 01:21:43.005000 audit: BPF prog-id=168 op=LOAD Jan 14 01:21:43.005000 audit[3493]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3336 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:43.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261613339663162386338616638653630653835616138333731326336 Jan 14 01:21:43.058365 containerd[1611]: time="2026-01-14T01:21:43.058267843Z" level=info msg="StartContainer for \"baa39f1b8c8af8e60e85aa83712c6c5e04282fbf23f7d8a8b8dbe540cb038620\" returns successfully" Jan 14 01:21:43.080700 systemd[1]: cri-containerd-baa39f1b8c8af8e60e85aa83712c6c5e04282fbf23f7d8a8b8dbe540cb038620.scope: Deactivated successfully. 
Jan 14 01:21:43.084609 containerd[1611]: time="2026-01-14T01:21:43.084459206Z" level=info msg="received container exit event container_id:\"baa39f1b8c8af8e60e85aa83712c6c5e04282fbf23f7d8a8b8dbe540cb038620\" id:\"baa39f1b8c8af8e60e85aa83712c6c5e04282fbf23f7d8a8b8dbe540cb038620\" pid:3508 exited_at:{seconds:1768353703 nanos:83325342}" Jan 14 01:21:43.087000 audit: BPF prog-id=168 op=UNLOAD Jan 14 01:21:43.521858 kubelet[2787]: I0114 01:21:43.521779 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:21:43.523726 kubelet[2787]: E0114 01:21:43.522852 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:43.524606 kubelet[2787]: E0114 01:21:43.523860 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:43.526170 containerd[1611]: time="2026-01-14T01:21:43.525903208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 01:21:44.393336 kubelet[2787]: E0114 01:21:44.393207 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqxnp" podUID="ba3d93c2-390e-4ba5-bb19-4864194c73f7" Jan 14 01:21:45.707896 containerd[1611]: time="2026-01-14T01:21:45.707814055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:45.709734 containerd[1611]: time="2026-01-14T01:21:45.709314881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 01:21:45.711030 containerd[1611]: 
time="2026-01-14T01:21:45.710884970Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:45.715579 containerd[1611]: time="2026-01-14T01:21:45.715412024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:45.716349 containerd[1611]: time="2026-01-14T01:21:45.716272556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.190280389s" Jan 14 01:21:45.716392 containerd[1611]: time="2026-01-14T01:21:45.716351173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 01:21:45.722933 containerd[1611]: time="2026-01-14T01:21:45.722868676Z" level=info msg="CreateContainer within sandbox \"7005a825cc642e05696c3f5c3c2eb3128eb14408e5b2f2c24f62131c542a8ad9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 01:21:45.735831 containerd[1611]: time="2026-01-14T01:21:45.735725614Z" level=info msg="Container 9a354ebd58d4b7f0f6f7362950881e1076efd07f831db04f3461d06dd52de6a1: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:45.753153 containerd[1611]: time="2026-01-14T01:21:45.753010717Z" level=info msg="CreateContainer within sandbox \"7005a825cc642e05696c3f5c3c2eb3128eb14408e5b2f2c24f62131c542a8ad9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9a354ebd58d4b7f0f6f7362950881e1076efd07f831db04f3461d06dd52de6a1\"" Jan 14 
01:21:45.754042 containerd[1611]: time="2026-01-14T01:21:45.753843742Z" level=info msg="StartContainer for \"9a354ebd58d4b7f0f6f7362950881e1076efd07f831db04f3461d06dd52de6a1\"" Jan 14 01:21:45.755599 containerd[1611]: time="2026-01-14T01:21:45.755438299Z" level=info msg="connecting to shim 9a354ebd58d4b7f0f6f7362950881e1076efd07f831db04f3461d06dd52de6a1" address="unix:///run/containerd/s/698d34ad6847d97851061808812649728c49fb12f03ac9d3d0e0acc1546ea0a8" protocol=ttrpc version=3 Jan 14 01:21:45.786787 systemd[1]: Started cri-containerd-9a354ebd58d4b7f0f6f7362950881e1076efd07f831db04f3461d06dd52de6a1.scope - libcontainer container 9a354ebd58d4b7f0f6f7362950881e1076efd07f831db04f3461d06dd52de6a1. Jan 14 01:21:45.874000 audit: BPF prog-id=169 op=LOAD Jan 14 01:21:45.878805 kernel: kauditd_printk_skb: 78 callbacks suppressed Jan 14 01:21:45.878899 kernel: audit: type=1334 audit(1768353705.874:561): prog-id=169 op=LOAD Jan 14 01:21:45.874000 audit[3553]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3336 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:45.895679 kernel: audit: type=1300 audit(1768353705.874:561): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3336 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:45.895744 kernel: audit: type=1327 audit(1768353705.874:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961333534656264353864346237663066366637333632393530383831 Jan 14 01:21:45.874000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961333534656264353864346237663066366637333632393530383831 Jan 14 01:21:45.874000 audit: BPF prog-id=170 op=LOAD Jan 14 01:21:45.912273 kernel: audit: type=1334 audit(1768353705.874:562): prog-id=170 op=LOAD Jan 14 01:21:45.874000 audit[3553]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3336 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:45.925962 kernel: audit: type=1300 audit(1768353705.874:562): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3336 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:45.926060 kernel: audit: type=1327 audit(1768353705.874:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961333534656264353864346237663066366637333632393530383831 Jan 14 01:21:45.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961333534656264353864346237663066366637333632393530383831 Jan 14 01:21:45.874000 audit: BPF prog-id=170 op=UNLOAD Jan 14 01:21:45.942210 kernel: audit: type=1334 audit(1768353705.874:563): prog-id=170 op=UNLOAD Jan 14 01:21:45.942268 kernel: audit: type=1300 audit(1768353705.874:563): arch=c000003e syscall=3 
success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:45.874000 audit[3553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:45.954329 containerd[1611]: time="2026-01-14T01:21:45.954084493Z" level=info msg="StartContainer for \"9a354ebd58d4b7f0f6f7362950881e1076efd07f831db04f3461d06dd52de6a1\" returns successfully" Jan 14 01:21:45.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961333534656264353864346237663066366637333632393530383831 Jan 14 01:21:45.977072 kernel: audit: type=1327 audit(1768353705.874:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961333534656264353864346237663066366637333632393530383831 Jan 14 01:21:45.977279 kernel: audit: type=1334 audit(1768353705.874:564): prog-id=169 op=UNLOAD Jan 14 01:21:45.874000 audit: BPF prog-id=169 op=UNLOAD Jan 14 01:21:45.874000 audit[3553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:45.874000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961333534656264353864346237663066366637333632393530383831 Jan 14 01:21:45.874000 audit: BPF prog-id=171 op=LOAD Jan 14 01:21:45.874000 audit[3553]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3336 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:45.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961333534656264353864346237663066366637333632393530383831 Jan 14 01:21:46.361498 kubelet[2787]: I0114 01:21:46.361249 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:21:46.362379 kubelet[2787]: E0114 01:21:46.361926 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:46.393643 kubelet[2787]: E0114 01:21:46.392823 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqxnp" podUID="ba3d93c2-390e-4ba5-bb19-4864194c73f7" Jan 14 01:21:46.425000 audit[3587]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3587 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:46.425000 audit[3587]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeb0f46d70 
a2=0 a3=7ffeb0f46d5c items=0 ppid=2947 pid=3587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:46.425000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:46.436000 audit[3587]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3587 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:46.436000 audit[3587]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffeb0f46d70 a2=0 a3=7ffeb0f46d5c items=0 ppid=2947 pid=3587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:46.436000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:46.541004 kubelet[2787]: E0114 01:21:46.540894 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:46.541124 kubelet[2787]: E0114 01:21:46.541005 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:46.884796 systemd[1]: cri-containerd-9a354ebd58d4b7f0f6f7362950881e1076efd07f831db04f3461d06dd52de6a1.scope: Deactivated successfully. Jan 14 01:21:46.885403 systemd[1]: cri-containerd-9a354ebd58d4b7f0f6f7362950881e1076efd07f831db04f3461d06dd52de6a1.scope: Consumed 880ms CPU time, 179.2M memory peak, 3.1M read from disk, 171.3M written to disk. 
Jan 14 01:21:46.889894 containerd[1611]: time="2026-01-14T01:21:46.889199756Z" level=info msg="received container exit event container_id:\"9a354ebd58d4b7f0f6f7362950881e1076efd07f831db04f3461d06dd52de6a1\" id:\"9a354ebd58d4b7f0f6f7362950881e1076efd07f831db04f3461d06dd52de6a1\" pid:3566 exited_at:{seconds:1768353706 nanos:886752369}" Jan 14 01:21:46.889000 audit: BPF prog-id=171 op=UNLOAD Jan 14 01:21:46.961238 kubelet[2787]: I0114 01:21:46.960678 2787 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 01:21:46.983347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a354ebd58d4b7f0f6f7362950881e1076efd07f831db04f3461d06dd52de6a1-rootfs.mount: Deactivated successfully. Jan 14 01:21:47.116911 systemd[1]: Created slice kubepods-burstable-pod04319d9b_642a_43be_8c0b_1ecdc12ac533.slice - libcontainer container kubepods-burstable-pod04319d9b_642a_43be_8c0b_1ecdc12ac533.slice. Jan 14 01:21:47.148012 systemd[1]: Created slice kubepods-besteffort-podf1b6ad95_23c6_499e_8a3d_6d0948845f18.slice - libcontainer container kubepods-besteffort-podf1b6ad95_23c6_499e_8a3d_6d0948845f18.slice. 
Jan 14 01:21:47.167974 kubelet[2787]: I0114 01:21:47.167040 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1b6ad95-23c6-499e-8a3d-6d0948845f18-whisker-ca-bundle\") pod \"whisker-55b9cd4d9f-58dh8\" (UID: \"f1b6ad95-23c6-499e-8a3d-6d0948845f18\") " pod="calico-system/whisker-55b9cd4d9f-58dh8" Jan 14 01:21:47.167974 kubelet[2787]: I0114 01:21:47.167107 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5d1e217-40e3-4ad0-82fb-7639214c6e0d-config-volume\") pod \"coredns-674b8bbfcf-t5t8t\" (UID: \"b5d1e217-40e3-4ad0-82fb-7639214c6e0d\") " pod="kube-system/coredns-674b8bbfcf-t5t8t" Jan 14 01:21:47.167974 kubelet[2787]: I0114 01:21:47.167130 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1b6ad95-23c6-499e-8a3d-6d0948845f18-whisker-backend-key-pair\") pod \"whisker-55b9cd4d9f-58dh8\" (UID: \"f1b6ad95-23c6-499e-8a3d-6d0948845f18\") " pod="calico-system/whisker-55b9cd4d9f-58dh8" Jan 14 01:21:47.167974 kubelet[2787]: I0114 01:21:47.167155 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjz9n\" (UniqueName: \"kubernetes.io/projected/f1b6ad95-23c6-499e-8a3d-6d0948845f18-kube-api-access-jjz9n\") pod \"whisker-55b9cd4d9f-58dh8\" (UID: \"f1b6ad95-23c6-499e-8a3d-6d0948845f18\") " pod="calico-system/whisker-55b9cd4d9f-58dh8" Jan 14 01:21:47.167974 kubelet[2787]: I0114 01:21:47.167174 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfkxv\" (UniqueName: \"kubernetes.io/projected/04319d9b-642a-43be-8c0b-1ecdc12ac533-kube-api-access-sfkxv\") pod \"coredns-674b8bbfcf-7w7jk\" (UID: 
\"04319d9b-642a-43be-8c0b-1ecdc12ac533\") " pod="kube-system/coredns-674b8bbfcf-7w7jk" Jan 14 01:21:47.168211 kubelet[2787]: I0114 01:21:47.167194 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04319d9b-642a-43be-8c0b-1ecdc12ac533-config-volume\") pod \"coredns-674b8bbfcf-7w7jk\" (UID: \"04319d9b-642a-43be-8c0b-1ecdc12ac533\") " pod="kube-system/coredns-674b8bbfcf-7w7jk" Jan 14 01:21:47.168211 kubelet[2787]: I0114 01:21:47.167212 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp9d6\" (UniqueName: \"kubernetes.io/projected/b5d1e217-40e3-4ad0-82fb-7639214c6e0d-kube-api-access-zp9d6\") pod \"coredns-674b8bbfcf-t5t8t\" (UID: \"b5d1e217-40e3-4ad0-82fb-7639214c6e0d\") " pod="kube-system/coredns-674b8bbfcf-t5t8t" Jan 14 01:21:47.176874 systemd[1]: Created slice kubepods-burstable-podb5d1e217_40e3_4ad0_82fb_7639214c6e0d.slice - libcontainer container kubepods-burstable-podb5d1e217_40e3_4ad0_82fb_7639214c6e0d.slice. Jan 14 01:21:47.195237 systemd[1]: Created slice kubepods-besteffort-pod7724ac30_d973_433e_90c7_10adfa17a249.slice - libcontainer container kubepods-besteffort-pod7724ac30_d973_433e_90c7_10adfa17a249.slice. Jan 14 01:21:47.207679 systemd[1]: Created slice kubepods-besteffort-pod09905137_6883_4a25_b76e_d0608b4b6347.slice - libcontainer container kubepods-besteffort-pod09905137_6883_4a25_b76e_d0608b4b6347.slice. Jan 14 01:21:47.219203 systemd[1]: Created slice kubepods-besteffort-pod1bd888e4_98c6_46dd_883e_12946740dfe2.slice - libcontainer container kubepods-besteffort-pod1bd888e4_98c6_46dd_883e_12946740dfe2.slice. Jan 14 01:21:47.237787 systemd[1]: Created slice kubepods-besteffort-pod2e6b76b0_bbf3_4bda_8c0a_ac8224558858.slice - libcontainer container kubepods-besteffort-pod2e6b76b0_bbf3_4bda_8c0a_ac8224558858.slice. 
Jan 14 01:21:47.270790 kubelet[2787]: I0114 01:21:47.269947 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbg5p\" (UniqueName: \"kubernetes.io/projected/1bd888e4-98c6-46dd-883e-12946740dfe2-kube-api-access-sbg5p\") pod \"calico-apiserver-6c99cb9d5d-jz6rb\" (UID: \"1bd888e4-98c6-46dd-883e-12946740dfe2\") " pod="calico-apiserver/calico-apiserver-6c99cb9d5d-jz6rb" Jan 14 01:21:47.272118 kubelet[2787]: I0114 01:21:47.272095 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnvwj\" (UniqueName: \"kubernetes.io/projected/2e6b76b0-bbf3-4bda-8c0a-ac8224558858-kube-api-access-hnvwj\") pod \"calico-apiserver-6c99cb9d5d-kj4q4\" (UID: \"2e6b76b0-bbf3-4bda-8c0a-ac8224558858\") " pod="calico-apiserver/calico-apiserver-6c99cb9d5d-kj4q4" Jan 14 01:21:47.272258 kubelet[2787]: I0114 01:21:47.272237 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09905137-6883-4a25-b76e-d0608b4b6347-config\") pod \"goldmane-666569f655-x2sz9\" (UID: \"09905137-6883-4a25-b76e-d0608b4b6347\") " pod="calico-system/goldmane-666569f655-x2sz9" Jan 14 01:21:47.272919 kubelet[2787]: I0114 01:21:47.272870 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm9wn\" (UniqueName: \"kubernetes.io/projected/09905137-6883-4a25-b76e-d0608b4b6347-kube-api-access-xm9wn\") pod \"goldmane-666569f655-x2sz9\" (UID: \"09905137-6883-4a25-b76e-d0608b4b6347\") " pod="calico-system/goldmane-666569f655-x2sz9" Jan 14 01:21:47.273455 kubelet[2787]: I0114 01:21:47.273022 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09905137-6883-4a25-b76e-d0608b4b6347-goldmane-ca-bundle\") pod \"goldmane-666569f655-x2sz9\" (UID: 
\"09905137-6883-4a25-b76e-d0608b4b6347\") " pod="calico-system/goldmane-666569f655-x2sz9" Jan 14 01:21:47.273673 kubelet[2787]: I0114 01:21:47.273653 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e6b76b0-bbf3-4bda-8c0a-ac8224558858-calico-apiserver-certs\") pod \"calico-apiserver-6c99cb9d5d-kj4q4\" (UID: \"2e6b76b0-bbf3-4bda-8c0a-ac8224558858\") " pod="calico-apiserver/calico-apiserver-6c99cb9d5d-kj4q4" Jan 14 01:21:47.273807 kubelet[2787]: I0114 01:21:47.273789 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/09905137-6883-4a25-b76e-d0608b4b6347-goldmane-key-pair\") pod \"goldmane-666569f655-x2sz9\" (UID: \"09905137-6883-4a25-b76e-d0608b4b6347\") " pod="calico-system/goldmane-666569f655-x2sz9" Jan 14 01:21:47.274664 kubelet[2787]: I0114 01:21:47.274641 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7724ac30-d973-433e-90c7-10adfa17a249-tigera-ca-bundle\") pod \"calico-kube-controllers-59555f9565-zxzlc\" (UID: \"7724ac30-d973-433e-90c7-10adfa17a249\") " pod="calico-system/calico-kube-controllers-59555f9565-zxzlc" Jan 14 01:21:47.274775 kubelet[2787]: I0114 01:21:47.274751 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1bd888e4-98c6-46dd-883e-12946740dfe2-calico-apiserver-certs\") pod \"calico-apiserver-6c99cb9d5d-jz6rb\" (UID: \"1bd888e4-98c6-46dd-883e-12946740dfe2\") " pod="calico-apiserver/calico-apiserver-6c99cb9d5d-jz6rb" Jan 14 01:21:47.274986 kubelet[2787]: I0114 01:21:47.274952 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgsfz\" 
(UniqueName: \"kubernetes.io/projected/7724ac30-d973-433e-90c7-10adfa17a249-kube-api-access-pgsfz\") pod \"calico-kube-controllers-59555f9565-zxzlc\" (UID: \"7724ac30-d973-433e-90c7-10adfa17a249\") " pod="calico-system/calico-kube-controllers-59555f9565-zxzlc" Jan 14 01:21:47.427624 kubelet[2787]: E0114 01:21:47.424958 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:47.428081 containerd[1611]: time="2026-01-14T01:21:47.427962916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7w7jk,Uid:04319d9b-642a-43be-8c0b-1ecdc12ac533,Namespace:kube-system,Attempt:0,}" Jan 14 01:21:47.472889 containerd[1611]: time="2026-01-14T01:21:47.472797494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55b9cd4d9f-58dh8,Uid:f1b6ad95-23c6-499e-8a3d-6d0948845f18,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:47.485412 kubelet[2787]: E0114 01:21:47.485188 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:47.486644 containerd[1611]: time="2026-01-14T01:21:47.486478706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t5t8t,Uid:b5d1e217-40e3-4ad0-82fb-7639214c6e0d,Namespace:kube-system,Attempt:0,}" Jan 14 01:21:47.503582 containerd[1611]: time="2026-01-14T01:21:47.503459452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59555f9565-zxzlc,Uid:7724ac30-d973-433e-90c7-10adfa17a249,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:47.515725 containerd[1611]: time="2026-01-14T01:21:47.514631720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x2sz9,Uid:09905137-6883-4a25-b76e-d0608b4b6347,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:47.533089 containerd[1611]: 
time="2026-01-14T01:21:47.532618074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c99cb9d5d-jz6rb,Uid:1bd888e4-98c6-46dd-883e-12946740dfe2,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:21:47.550127 containerd[1611]: time="2026-01-14T01:21:47.550027205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c99cb9d5d-kj4q4,Uid:2e6b76b0-bbf3-4bda-8c0a-ac8224558858,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:21:47.617291 kubelet[2787]: E0114 01:21:47.616984 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:47.638811 containerd[1611]: time="2026-01-14T01:21:47.635490978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 01:21:47.839331 containerd[1611]: time="2026-01-14T01:21:47.839288514Z" level=error msg="Failed to destroy network for sandbox \"80aeed048b506d68b88476df92332e4b8c8e45c9533e9e8f3f22667724300d75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.857273 containerd[1611]: time="2026-01-14T01:21:47.857087151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x2sz9,Uid:09905137-6883-4a25-b76e-d0608b4b6347,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"80aeed048b506d68b88476df92332e4b8c8e45c9533e9e8f3f22667724300d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.857796 containerd[1611]: time="2026-01-14T01:21:47.857708300Z" level=error msg="Failed to destroy network for sandbox 
\"e297de23e65dfd5b20243494eb33cf6a3d1a3c3a412c176d91fc99fa2311eb06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.858107 containerd[1611]: time="2026-01-14T01:21:47.857280869Z" level=error msg="Failed to destroy network for sandbox \"a4f85941a3e1db9407f0c30f4be090e173bc2c34f932d850789886d6a73cd9e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.858630 containerd[1611]: time="2026-01-14T01:21:47.858133278Z" level=error msg="Failed to destroy network for sandbox \"842570da302f12be56ad49920a0d1ea968998651d6e7e35819be16467ce19a12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.859334 kubelet[2787]: E0114 01:21:47.859095 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80aeed048b506d68b88476df92332e4b8c8e45c9533e9e8f3f22667724300d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.861796 kubelet[2787]: E0114 01:21:47.861634 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80aeed048b506d68b88476df92332e4b8c8e45c9533e9e8f3f22667724300d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x2sz9" Jan 14 01:21:47.861796 
kubelet[2787]: E0114 01:21:47.861690 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80aeed048b506d68b88476df92332e4b8c8e45c9533e9e8f3f22667724300d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x2sz9" Jan 14 01:21:47.862637 kubelet[2787]: E0114 01:21:47.861918 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-x2sz9_calico-system(09905137-6883-4a25-b76e-d0608b4b6347)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-x2sz9_calico-system(09905137-6883-4a25-b76e-d0608b4b6347)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80aeed048b506d68b88476df92332e4b8c8e45c9533e9e8f3f22667724300d75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-x2sz9" podUID="09905137-6883-4a25-b76e-d0608b4b6347" Jan 14 01:21:47.863647 containerd[1611]: time="2026-01-14T01:21:47.863618100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55b9cd4d9f-58dh8,Uid:f1b6ad95-23c6-499e-8a3d-6d0948845f18,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e297de23e65dfd5b20243494eb33cf6a3d1a3c3a412c176d91fc99fa2311eb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.865216 kubelet[2787]: E0114 01:21:47.864863 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"e297de23e65dfd5b20243494eb33cf6a3d1a3c3a412c176d91fc99fa2311eb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.865779 kubelet[2787]: E0114 01:21:47.865490 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e297de23e65dfd5b20243494eb33cf6a3d1a3c3a412c176d91fc99fa2311eb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55b9cd4d9f-58dh8" Jan 14 01:21:47.866203 kubelet[2787]: E0114 01:21:47.865964 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e297de23e65dfd5b20243494eb33cf6a3d1a3c3a412c176d91fc99fa2311eb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55b9cd4d9f-58dh8" Jan 14 01:21:47.866203 kubelet[2787]: E0114 01:21:47.866132 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55b9cd4d9f-58dh8_calico-system(f1b6ad95-23c6-499e-8a3d-6d0948845f18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55b9cd4d9f-58dh8_calico-system(f1b6ad95-23c6-499e-8a3d-6d0948845f18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e297de23e65dfd5b20243494eb33cf6a3d1a3c3a412c176d91fc99fa2311eb06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-55b9cd4d9f-58dh8" podUID="f1b6ad95-23c6-499e-8a3d-6d0948845f18" Jan 14 01:21:47.868734 containerd[1611]: time="2026-01-14T01:21:47.868334232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c99cb9d5d-kj4q4,Uid:2e6b76b0-bbf3-4bda-8c0a-ac8224558858,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"842570da302f12be56ad49920a0d1ea968998651d6e7e35819be16467ce19a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.870626 kubelet[2787]: E0114 01:21:47.869299 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"842570da302f12be56ad49920a0d1ea968998651d6e7e35819be16467ce19a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.870626 kubelet[2787]: E0114 01:21:47.869930 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"842570da302f12be56ad49920a0d1ea968998651d6e7e35819be16467ce19a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-kj4q4" Jan 14 01:21:47.870626 kubelet[2787]: E0114 01:21:47.869967 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"842570da302f12be56ad49920a0d1ea968998651d6e7e35819be16467ce19a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-kj4q4" Jan 14 01:21:47.870796 kubelet[2787]: E0114 01:21:47.870236 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c99cb9d5d-kj4q4_calico-apiserver(2e6b76b0-bbf3-4bda-8c0a-ac8224558858)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c99cb9d5d-kj4q4_calico-apiserver(2e6b76b0-bbf3-4bda-8c0a-ac8224558858)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"842570da302f12be56ad49920a0d1ea968998651d6e7e35819be16467ce19a12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-kj4q4" podUID="2e6b76b0-bbf3-4bda-8c0a-ac8224558858" Jan 14 01:21:47.870981 containerd[1611]: time="2026-01-14T01:21:47.870640169Z" level=error msg="Failed to destroy network for sandbox \"e14e2462f5a6a27572989a1657f768bb396b47b6d719a75037c1e10bc408335f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.872138 containerd[1611]: time="2026-01-14T01:21:47.871653947Z" level=error msg="Failed to destroy network for sandbox \"d47ac21b8ddf12ef03c894a70f18e92cf9661bdba87a5a54b74f6d2cc842f164\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.875924 containerd[1611]: time="2026-01-14T01:21:47.875824769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7w7jk,Uid:04319d9b-642a-43be-8c0b-1ecdc12ac533,Namespace:kube-system,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f85941a3e1db9407f0c30f4be090e173bc2c34f932d850789886d6a73cd9e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.876179 kubelet[2787]: E0114 01:21:47.876123 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f85941a3e1db9407f0c30f4be090e173bc2c34f932d850789886d6a73cd9e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.876224 kubelet[2787]: E0114 01:21:47.876177 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f85941a3e1db9407f0c30f4be090e173bc2c34f932d850789886d6a73cd9e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7w7jk" Jan 14 01:21:47.876224 kubelet[2787]: E0114 01:21:47.876204 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f85941a3e1db9407f0c30f4be090e173bc2c34f932d850789886d6a73cd9e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7w7jk" Jan 14 01:21:47.876332 kubelet[2787]: E0114 01:21:47.876259 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7w7jk_kube-system(04319d9b-642a-43be-8c0b-1ecdc12ac533)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7w7jk_kube-system(04319d9b-642a-43be-8c0b-1ecdc12ac533)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4f85941a3e1db9407f0c30f4be090e173bc2c34f932d850789886d6a73cd9e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7w7jk" podUID="04319d9b-642a-43be-8c0b-1ecdc12ac533" Jan 14 01:21:47.883117 containerd[1611]: time="2026-01-14T01:21:47.882932526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59555f9565-zxzlc,Uid:7724ac30-d973-433e-90c7-10adfa17a249,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d47ac21b8ddf12ef03c894a70f18e92cf9661bdba87a5a54b74f6d2cc842f164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.883631 containerd[1611]: time="2026-01-14T01:21:47.883580323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t5t8t,Uid:b5d1e217-40e3-4ad0-82fb-7639214c6e0d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14e2462f5a6a27572989a1657f768bb396b47b6d719a75037c1e10bc408335f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.883795 kubelet[2787]: E0114 01:21:47.883446 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d47ac21b8ddf12ef03c894a70f18e92cf9661bdba87a5a54b74f6d2cc842f164\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.883859 kubelet[2787]: E0114 01:21:47.883803 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d47ac21b8ddf12ef03c894a70f18e92cf9661bdba87a5a54b74f6d2cc842f164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59555f9565-zxzlc" Jan 14 01:21:47.883859 kubelet[2787]: E0114 01:21:47.883831 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d47ac21b8ddf12ef03c894a70f18e92cf9661bdba87a5a54b74f6d2cc842f164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59555f9565-zxzlc" Jan 14 01:21:47.883946 kubelet[2787]: E0114 01:21:47.883872 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59555f9565-zxzlc_calico-system(7724ac30-d973-433e-90c7-10adfa17a249)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59555f9565-zxzlc_calico-system(7724ac30-d973-433e-90c7-10adfa17a249)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d47ac21b8ddf12ef03c894a70f18e92cf9661bdba87a5a54b74f6d2cc842f164\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-59555f9565-zxzlc" podUID="7724ac30-d973-433e-90c7-10adfa17a249" Jan 14 01:21:47.884363 kubelet[2787]: E0114 01:21:47.884212 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14e2462f5a6a27572989a1657f768bb396b47b6d719a75037c1e10bc408335f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.884363 kubelet[2787]: E0114 01:21:47.884272 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14e2462f5a6a27572989a1657f768bb396b47b6d719a75037c1e10bc408335f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t5t8t" Jan 14 01:21:47.884363 kubelet[2787]: E0114 01:21:47.884299 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14e2462f5a6a27572989a1657f768bb396b47b6d719a75037c1e10bc408335f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t5t8t" Jan 14 01:21:47.884682 kubelet[2787]: E0114 01:21:47.884336 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t5t8t_kube-system(b5d1e217-40e3-4ad0-82fb-7639214c6e0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t5t8t_kube-system(b5d1e217-40e3-4ad0-82fb-7639214c6e0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e14e2462f5a6a27572989a1657f768bb396b47b6d719a75037c1e10bc408335f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t5t8t" podUID="b5d1e217-40e3-4ad0-82fb-7639214c6e0d" Jan 14 01:21:47.886109 containerd[1611]: time="2026-01-14T01:21:47.885896176Z" level=error msg="Failed to destroy network for sandbox \"fd528d7a9a816666d722b21c200e5e908e20df7968edff0aa3ab1713dce3df5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.892441 containerd[1611]: time="2026-01-14T01:21:47.892191652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c99cb9d5d-jz6rb,Uid:1bd888e4-98c6-46dd-883e-12946740dfe2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd528d7a9a816666d722b21c200e5e908e20df7968edff0aa3ab1713dce3df5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.894597 kubelet[2787]: E0114 01:21:47.893638 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd528d7a9a816666d722b21c200e5e908e20df7968edff0aa3ab1713dce3df5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:47.894597 kubelet[2787]: E0114 01:21:47.893776 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fd528d7a9a816666d722b21c200e5e908e20df7968edff0aa3ab1713dce3df5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-jz6rb" Jan 14 01:21:47.894597 kubelet[2787]: E0114 01:21:47.893803 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd528d7a9a816666d722b21c200e5e908e20df7968edff0aa3ab1713dce3df5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-jz6rb" Jan 14 01:21:47.894771 kubelet[2787]: E0114 01:21:47.893856 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c99cb9d5d-jz6rb_calico-apiserver(1bd888e4-98c6-46dd-883e-12946740dfe2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c99cb9d5d-jz6rb_calico-apiserver(1bd888e4-98c6-46dd-883e-12946740dfe2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd528d7a9a816666d722b21c200e5e908e20df7968edff0aa3ab1713dce3df5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-jz6rb" podUID="1bd888e4-98c6-46dd-883e-12946740dfe2" Jan 14 01:21:48.402400 systemd[1]: Created slice kubepods-besteffort-podba3d93c2_390e_4ba5_bb19_4864194c73f7.slice - libcontainer container kubepods-besteffort-podba3d93c2_390e_4ba5_bb19_4864194c73f7.slice. 
Jan 14 01:21:48.407227 containerd[1611]: time="2026-01-14T01:21:48.407129329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqxnp,Uid:ba3d93c2-390e-4ba5-bb19-4864194c73f7,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:48.546069 containerd[1611]: time="2026-01-14T01:21:48.545915926Z" level=error msg="Failed to destroy network for sandbox \"2632d955565be6eac04a53a37a07de4485fa71ecfe0fe81c368e258af1bfce49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:48.549620 systemd[1]: run-netns-cni\x2dda37c7fe\x2d803b\x2d54e8\x2d85ea\x2d69961b5cb122.mount: Deactivated successfully. Jan 14 01:21:48.550406 containerd[1611]: time="2026-01-14T01:21:48.550246894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqxnp,Uid:ba3d93c2-390e-4ba5-bb19-4864194c73f7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2632d955565be6eac04a53a37a07de4485fa71ecfe0fe81c368e258af1bfce49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:48.550821 kubelet[2787]: E0114 01:21:48.550735 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2632d955565be6eac04a53a37a07de4485fa71ecfe0fe81c368e258af1bfce49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:48.551677 kubelet[2787]: E0114 01:21:48.550823 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2632d955565be6eac04a53a37a07de4485fa71ecfe0fe81c368e258af1bfce49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqxnp" Jan 14 01:21:48.551677 kubelet[2787]: E0114 01:21:48.550846 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2632d955565be6eac04a53a37a07de4485fa71ecfe0fe81c368e258af1bfce49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqxnp" Jan 14 01:21:48.551677 kubelet[2787]: E0114 01:21:48.551321 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qqxnp_calico-system(ba3d93c2-390e-4ba5-bb19-4864194c73f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qqxnp_calico-system(ba3d93c2-390e-4ba5-bb19-4864194c73f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2632d955565be6eac04a53a37a07de4485fa71ecfe0fe81c368e258af1bfce49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qqxnp" podUID="ba3d93c2-390e-4ba5-bb19-4864194c73f7" Jan 14 01:21:55.447996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2018785504.mount: Deactivated successfully. 
Jan 14 01:21:55.617706 containerd[1611]: time="2026-01-14T01:21:55.617565493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:55.619209 containerd[1611]: time="2026-01-14T01:21:55.619148131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 01:21:55.620600 containerd[1611]: time="2026-01-14T01:21:55.620498005Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:55.623464 containerd[1611]: time="2026-01-14T01:21:55.623338324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:55.624164 containerd[1611]: time="2026-01-14T01:21:55.624082608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.986384379s" Jan 14 01:21:55.624164 containerd[1611]: time="2026-01-14T01:21:55.624147897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 01:21:55.645207 containerd[1611]: time="2026-01-14T01:21:55.645048636Z" level=info msg="CreateContainer within sandbox \"7005a825cc642e05696c3f5c3c2eb3128eb14408e5b2f2c24f62131c542a8ad9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 01:21:55.659046 containerd[1611]: time="2026-01-14T01:21:55.658835289Z" level=info msg="Container 
ed5aeed5d83048a3c6a72854f0a0acbc4c01c0030c1e945aade3fd3d63f56dfe: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:55.687954 containerd[1611]: time="2026-01-14T01:21:55.687842725Z" level=info msg="CreateContainer within sandbox \"7005a825cc642e05696c3f5c3c2eb3128eb14408e5b2f2c24f62131c542a8ad9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ed5aeed5d83048a3c6a72854f0a0acbc4c01c0030c1e945aade3fd3d63f56dfe\"" Jan 14 01:21:55.688943 containerd[1611]: time="2026-01-14T01:21:55.688893641Z" level=info msg="StartContainer for \"ed5aeed5d83048a3c6a72854f0a0acbc4c01c0030c1e945aade3fd3d63f56dfe\"" Jan 14 01:21:55.691150 containerd[1611]: time="2026-01-14T01:21:55.691093697Z" level=info msg="connecting to shim ed5aeed5d83048a3c6a72854f0a0acbc4c01c0030c1e945aade3fd3d63f56dfe" address="unix:///run/containerd/s/698d34ad6847d97851061808812649728c49fb12f03ac9d3d0e0acc1546ea0a8" protocol=ttrpc version=3 Jan 14 01:21:55.726036 systemd[1]: Started cri-containerd-ed5aeed5d83048a3c6a72854f0a0acbc4c01c0030c1e945aade3fd3d63f56dfe.scope - libcontainer container ed5aeed5d83048a3c6a72854f0a0acbc4c01c0030c1e945aade3fd3d63f56dfe. 
Jan 14 01:21:55.811000 audit: BPF prog-id=172 op=LOAD Jan 14 01:21:55.815604 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 01:21:55.816592 kernel: audit: type=1334 audit(1768353715.811:569): prog-id=172 op=LOAD Jan 14 01:21:55.811000 audit[3884]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3336 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:55.834751 kernel: audit: type=1300 audit(1768353715.811:569): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3336 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:55.834895 kernel: audit: type=1327 audit(1768353715.811:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564356165656435643833303438613363366137323835346630613061 Jan 14 01:21:55.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564356165656435643833303438613363366137323835346630613061 Jan 14 01:21:55.811000 audit: BPF prog-id=173 op=LOAD Jan 14 01:21:55.849609 kernel: audit: type=1334 audit(1768353715.811:570): prog-id=173 op=LOAD Jan 14 01:21:55.849717 kernel: audit: type=1300 audit(1768353715.811:570): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3336 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:55.811000 audit[3884]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3336 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:55.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564356165656435643833303438613363366137323835346630613061 Jan 14 01:21:55.873305 kernel: audit: type=1327 audit(1768353715.811:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564356165656435643833303438613363366137323835346630613061 Jan 14 01:21:55.874701 kernel: audit: type=1334 audit(1768353715.811:571): prog-id=173 op=UNLOAD Jan 14 01:21:55.811000 audit: BPF prog-id=173 op=UNLOAD Jan 14 01:21:55.811000 audit[3884]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:55.887027 kernel: audit: type=1300 audit(1768353715.811:571): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:55.811000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564356165656435643833303438613363366137323835346630613061 Jan 14 01:21:55.889798 containerd[1611]: time="2026-01-14T01:21:55.888166046Z" level=info msg="StartContainer for \"ed5aeed5d83048a3c6a72854f0a0acbc4c01c0030c1e945aade3fd3d63f56dfe\" returns successfully" Jan 14 01:21:55.900474 kernel: audit: type=1327 audit(1768353715.811:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564356165656435643833303438613363366137323835346630613061 Jan 14 01:21:55.900709 kernel: audit: type=1334 audit(1768353715.811:572): prog-id=172 op=UNLOAD Jan 14 01:21:55.811000 audit: BPF prog-id=172 op=UNLOAD Jan 14 01:21:55.811000 audit[3884]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:55.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564356165656435643833303438613363366137323835346630613061 Jan 14 01:21:55.812000 audit: BPF prog-id=174 op=LOAD Jan 14 01:21:55.812000 audit[3884]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3336 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:55.812000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564356165656435643833303438613363366137323835346630613061 Jan 14 01:21:56.003204 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 01:21:56.003441 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 14 01:21:56.260002 kubelet[2787]: I0114 01:21:56.259837 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1b6ad95-23c6-499e-8a3d-6d0948845f18-whisker-backend-key-pair\") pod \"f1b6ad95-23c6-499e-8a3d-6d0948845f18\" (UID: \"f1b6ad95-23c6-499e-8a3d-6d0948845f18\") " Jan 14 01:21:56.260002 kubelet[2787]: I0114 01:21:56.259937 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjz9n\" (UniqueName: \"kubernetes.io/projected/f1b6ad95-23c6-499e-8a3d-6d0948845f18-kube-api-access-jjz9n\") pod \"f1b6ad95-23c6-499e-8a3d-6d0948845f18\" (UID: \"f1b6ad95-23c6-499e-8a3d-6d0948845f18\") " Jan 14 01:21:56.260002 kubelet[2787]: I0114 01:21:56.259979 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1b6ad95-23c6-499e-8a3d-6d0948845f18-whisker-ca-bundle\") pod \"f1b6ad95-23c6-499e-8a3d-6d0948845f18\" (UID: \"f1b6ad95-23c6-499e-8a3d-6d0948845f18\") " Jan 14 01:21:56.260867 kubelet[2787]: I0114 01:21:56.260673 2787 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1b6ad95-23c6-499e-8a3d-6d0948845f18-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f1b6ad95-23c6-499e-8a3d-6d0948845f18" (UID: "f1b6ad95-23c6-499e-8a3d-6d0948845f18"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 01:21:56.267752 kubelet[2787]: I0114 01:21:56.267575 2787 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b6ad95-23c6-499e-8a3d-6d0948845f18-kube-api-access-jjz9n" (OuterVolumeSpecName: "kube-api-access-jjz9n") pod "f1b6ad95-23c6-499e-8a3d-6d0948845f18" (UID: "f1b6ad95-23c6-499e-8a3d-6d0948845f18"). InnerVolumeSpecName "kube-api-access-jjz9n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 01:21:56.268745 kubelet[2787]: I0114 01:21:56.268583 2787 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b6ad95-23c6-499e-8a3d-6d0948845f18-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f1b6ad95-23c6-499e-8a3d-6d0948845f18" (UID: "f1b6ad95-23c6-499e-8a3d-6d0948845f18"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 01:21:56.360906 kubelet[2787]: I0114 01:21:56.360842 2787 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1b6ad95-23c6-499e-8a3d-6d0948845f18-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 14 01:21:56.360906 kubelet[2787]: I0114 01:21:56.360911 2787 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jjz9n\" (UniqueName: \"kubernetes.io/projected/f1b6ad95-23c6-499e-8a3d-6d0948845f18-kube-api-access-jjz9n\") on node \"localhost\" DevicePath \"\"" Jan 14 01:21:56.360906 kubelet[2787]: I0114 01:21:56.360930 2787 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1b6ad95-23c6-499e-8a3d-6d0948845f18-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 14 01:21:56.406375 systemd[1]: Removed slice kubepods-besteffort-podf1b6ad95_23c6_499e_8a3d_6d0948845f18.slice - libcontainer container 
kubepods-besteffort-podf1b6ad95_23c6_499e_8a3d_6d0948845f18.slice. Jan 14 01:21:56.449746 systemd[1]: var-lib-kubelet-pods-f1b6ad95\x2d23c6\x2d499e\x2d8a3d\x2d6d0948845f18-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djjz9n.mount: Deactivated successfully. Jan 14 01:21:56.449904 systemd[1]: var-lib-kubelet-pods-f1b6ad95\x2d23c6\x2d499e\x2d8a3d\x2d6d0948845f18-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 01:21:56.639358 kubelet[2787]: E0114 01:21:56.639068 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:56.662605 kubelet[2787]: I0114 01:21:56.662481 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8swx8" podStartSLOduration=2.526905578 podStartE2EDuration="18.662465803s" podCreationTimestamp="2026-01-14 01:21:38 +0000 UTC" firstStartedPulling="2026-01-14 01:21:39.489840776 +0000 UTC m=+23.267407831" lastFinishedPulling="2026-01-14 01:21:55.625401001 +0000 UTC m=+39.402968056" observedRunningTime="2026-01-14 01:21:56.660297986 +0000 UTC m=+40.437865061" watchObservedRunningTime="2026-01-14 01:21:56.662465803 +0000 UTC m=+40.440032857" Jan 14 01:21:56.752714 systemd[1]: Created slice kubepods-besteffort-pod0c6080b7_a312_4044_afca_8c80fd4d65bc.slice - libcontainer container kubepods-besteffort-pod0c6080b7_a312_4044_afca_8c80fd4d65bc.slice. 
Jan 14 01:21:56.866276 kubelet[2787]: I0114 01:21:56.866178 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcbl8\" (UniqueName: \"kubernetes.io/projected/0c6080b7-a312-4044-afca-8c80fd4d65bc-kube-api-access-wcbl8\") pod \"whisker-c69b4ddbc-mp7cc\" (UID: \"0c6080b7-a312-4044-afca-8c80fd4d65bc\") " pod="calico-system/whisker-c69b4ddbc-mp7cc" Jan 14 01:21:56.866276 kubelet[2787]: I0114 01:21:56.866250 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0c6080b7-a312-4044-afca-8c80fd4d65bc-whisker-backend-key-pair\") pod \"whisker-c69b4ddbc-mp7cc\" (UID: \"0c6080b7-a312-4044-afca-8c80fd4d65bc\") " pod="calico-system/whisker-c69b4ddbc-mp7cc" Jan 14 01:21:56.866276 kubelet[2787]: I0114 01:21:56.866268 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c6080b7-a312-4044-afca-8c80fd4d65bc-whisker-ca-bundle\") pod \"whisker-c69b4ddbc-mp7cc\" (UID: \"0c6080b7-a312-4044-afca-8c80fd4d65bc\") " pod="calico-system/whisker-c69b4ddbc-mp7cc" Jan 14 01:21:57.060627 containerd[1611]: time="2026-01-14T01:21:57.060419030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c69b4ddbc-mp7cc,Uid:0c6080b7-a312-4044-afca-8c80fd4d65bc,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:57.307830 systemd-networkd[1516]: califeef95731a0: Link UP Jan 14 01:21:57.308253 systemd-networkd[1516]: califeef95731a0: Gained carrier Jan 14 01:21:57.322321 containerd[1611]: 2026-01-14 01:21:57.103 [INFO][3977] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:21:57.322321 containerd[1611]: 2026-01-14 01:21:57.135 [INFO][3977] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--c69b4ddbc--mp7cc-eth0 
whisker-c69b4ddbc- calico-system 0c6080b7-a312-4044-afca-8c80fd4d65bc 916 0 2026-01-14 01:21:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c69b4ddbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-c69b4ddbc-mp7cc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califeef95731a0 [] [] }} ContainerID="94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" Namespace="calico-system" Pod="whisker-c69b4ddbc-mp7cc" WorkloadEndpoint="localhost-k8s-whisker--c69b4ddbc--mp7cc-" Jan 14 01:21:57.322321 containerd[1611]: 2026-01-14 01:21:57.135 [INFO][3977] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" Namespace="calico-system" Pod="whisker-c69b4ddbc-mp7cc" WorkloadEndpoint="localhost-k8s-whisker--c69b4ddbc--mp7cc-eth0" Jan 14 01:21:57.322321 containerd[1611]: 2026-01-14 01:21:57.240 [INFO][3992] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" HandleID="k8s-pod-network.94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" Workload="localhost-k8s-whisker--c69b4ddbc--mp7cc-eth0" Jan 14 01:21:57.322635 containerd[1611]: 2026-01-14 01:21:57.241 [INFO][3992] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" HandleID="k8s-pod-network.94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" Workload="localhost-k8s-whisker--c69b4ddbc--mp7cc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012d700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-c69b4ddbc-mp7cc", "timestamp":"2026-01-14 01:21:57.240275885 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:21:57.322635 containerd[1611]: 2026-01-14 01:21:57.241 [INFO][3992] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:21:57.322635 containerd[1611]: 2026-01-14 01:21:57.242 [INFO][3992] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:21:57.322635 containerd[1611]: 2026-01-14 01:21:57.242 [INFO][3992] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:21:57.322635 containerd[1611]: 2026-01-14 01:21:57.253 [INFO][3992] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" host="localhost" Jan 14 01:21:57.322635 containerd[1611]: 2026-01-14 01:21:57.263 [INFO][3992] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:21:57.322635 containerd[1611]: 2026-01-14 01:21:57.271 [INFO][3992] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:21:57.322635 containerd[1611]: 2026-01-14 01:21:57.274 [INFO][3992] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:21:57.322635 containerd[1611]: 2026-01-14 01:21:57.277 [INFO][3992] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:21:57.322635 containerd[1611]: 2026-01-14 01:21:57.277 [INFO][3992] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" host="localhost" Jan 14 01:21:57.322902 containerd[1611]: 2026-01-14 01:21:57.280 [INFO][3992] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367 Jan 14 01:21:57.322902 
containerd[1611]: 2026-01-14 01:21:57.284 [INFO][3992] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" host="localhost" Jan 14 01:21:57.322902 containerd[1611]: 2026-01-14 01:21:57.291 [INFO][3992] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" host="localhost" Jan 14 01:21:57.322902 containerd[1611]: 2026-01-14 01:21:57.291 [INFO][3992] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" host="localhost" Jan 14 01:21:57.322902 containerd[1611]: 2026-01-14 01:21:57.291 [INFO][3992] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:21:57.322902 containerd[1611]: 2026-01-14 01:21:57.291 [INFO][3992] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" HandleID="k8s-pod-network.94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" Workload="localhost-k8s-whisker--c69b4ddbc--mp7cc-eth0" Jan 14 01:21:57.323011 containerd[1611]: 2026-01-14 01:21:57.295 [INFO][3977] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" Namespace="calico-system" Pod="whisker-c69b4ddbc-mp7cc" WorkloadEndpoint="localhost-k8s-whisker--c69b4ddbc--mp7cc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--c69b4ddbc--mp7cc-eth0", GenerateName:"whisker-c69b4ddbc-", Namespace:"calico-system", SelfLink:"", UID:"0c6080b7-a312-4044-afca-8c80fd4d65bc", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, 
time.January, 14, 1, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c69b4ddbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-c69b4ddbc-mp7cc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califeef95731a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:57.323011 containerd[1611]: 2026-01-14 01:21:57.295 [INFO][3977] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" Namespace="calico-system" Pod="whisker-c69b4ddbc-mp7cc" WorkloadEndpoint="localhost-k8s-whisker--c69b4ddbc--mp7cc-eth0" Jan 14 01:21:57.323116 containerd[1611]: 2026-01-14 01:21:57.295 [INFO][3977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califeef95731a0 ContainerID="94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" Namespace="calico-system" Pod="whisker-c69b4ddbc-mp7cc" WorkloadEndpoint="localhost-k8s-whisker--c69b4ddbc--mp7cc-eth0" Jan 14 01:21:57.323116 containerd[1611]: 2026-01-14 01:21:57.308 [INFO][3977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" Namespace="calico-system" Pod="whisker-c69b4ddbc-mp7cc" 
WorkloadEndpoint="localhost-k8s-whisker--c69b4ddbc--mp7cc-eth0" Jan 14 01:21:57.323163 containerd[1611]: 2026-01-14 01:21:57.308 [INFO][3977] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" Namespace="calico-system" Pod="whisker-c69b4ddbc-mp7cc" WorkloadEndpoint="localhost-k8s-whisker--c69b4ddbc--mp7cc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--c69b4ddbc--mp7cc-eth0", GenerateName:"whisker-c69b4ddbc-", Namespace:"calico-system", SelfLink:"", UID:"0c6080b7-a312-4044-afca-8c80fd4d65bc", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c69b4ddbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367", Pod:"whisker-c69b4ddbc-mp7cc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califeef95731a0", MAC:"5e:45:7d:dd:f2:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:57.323240 containerd[1611]: 2026-01-14 01:21:57.319 [INFO][3977] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" Namespace="calico-system" Pod="whisker-c69b4ddbc-mp7cc" WorkloadEndpoint="localhost-k8s-whisker--c69b4ddbc--mp7cc-eth0" Jan 14 01:21:57.521931 containerd[1611]: time="2026-01-14T01:21:57.521717784Z" level=info msg="connecting to shim 94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367" address="unix:///run/containerd/s/eac267e9b54365b3978ffa3e1143ba7c601318f61b8d74b176c066e1e37130cc" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:57.600633 systemd[1]: Started cri-containerd-94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367.scope - libcontainer container 94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367. Jan 14 01:21:57.647491 kubelet[2787]: E0114 01:21:57.646851 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:57.668000 audit: BPF prog-id=175 op=LOAD Jan 14 01:21:57.669000 audit: BPF prog-id=176 op=LOAD Jan 14 01:21:57.669000 audit[4120]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4107 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934633362323864393662336663656661366538303730656265636132 Jan 14 01:21:57.670000 audit: BPF prog-id=176 op=UNLOAD Jan 14 01:21:57.670000 audit[4120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4107 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934633362323864393662336663656661366538303730656265636132 Jan 14 01:21:57.670000 audit: BPF prog-id=177 op=LOAD Jan 14 01:21:57.670000 audit[4120]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4107 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934633362323864393662336663656661366538303730656265636132 Jan 14 01:21:57.671000 audit: BPF prog-id=178 op=LOAD Jan 14 01:21:57.671000 audit[4120]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4107 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934633362323864393662336663656661366538303730656265636132 Jan 14 01:21:57.671000 audit: BPF prog-id=178 op=UNLOAD Jan 14 01:21:57.671000 audit[4120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4107 pid=4120 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934633362323864393662336663656661366538303730656265636132 Jan 14 01:21:57.671000 audit: BPF prog-id=177 op=UNLOAD Jan 14 01:21:57.671000 audit[4120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4107 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934633362323864393662336663656661366538303730656265636132 Jan 14 01:21:57.671000 audit: BPF prog-id=179 op=LOAD Jan 14 01:21:57.671000 audit[4120]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4107 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934633362323864393662336663656661366538303730656265636132 Jan 14 01:21:57.678405 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 
01:21:57.766125 containerd[1611]: time="2026-01-14T01:21:57.765762399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c69b4ddbc-mp7cc,Uid:0c6080b7-a312-4044-afca-8c80fd4d65bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"94c3b28d96b3fcefa6e8070ebeca252313a5a9afb092e64446849b382fca1367\"" Jan 14 01:21:57.768983 containerd[1611]: time="2026-01-14T01:21:57.768297233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:21:57.827664 containerd[1611]: time="2026-01-14T01:21:57.827612614Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:57.829311 containerd[1611]: time="2026-01-14T01:21:57.829264625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:21:57.832180 kubelet[2787]: E0114 01:21:57.831892 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:21:57.832180 kubelet[2787]: E0114 01:21:57.831991 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:21:57.839962 kubelet[2787]: E0114 01:21:57.839880 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a68a300fc64e45a2b1bba454e6e6db2f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wcbl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c69b4ddbc-mp7cc_calico-system(0c6080b7-a312-4044-afca-8c80fd4d65bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:57.839000 audit: BPF prog-id=180 op=LOAD Jan 14 01:21:57.839000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd543c3b0 a2=98 a3=1fffffffffffffff items=0 ppid=4021 pid=4203 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.839000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:21:57.839000 audit: BPF prog-id=180 op=UNLOAD Jan 14 01:21:57.839000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffd543c380 a3=0 items=0 ppid=4021 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.839000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:21:57.839000 audit: BPF prog-id=181 op=LOAD Jan 14 01:21:57.839000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd543c290 a2=94 a3=3 items=0 ppid=4021 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.839000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:21:57.840000 audit: BPF prog-id=181 op=UNLOAD Jan 14 01:21:57.840000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=3 a1=7fffd543c290 a2=94 a3=3 items=0 ppid=4021 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.840000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:21:57.840000 audit: BPF prog-id=182 op=LOAD Jan 14 01:21:57.840000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd543c2d0 a2=94 a3=7fffd543c4b0 items=0 ppid=4021 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.840000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:21:57.840000 audit: BPF prog-id=182 op=UNLOAD Jan 14 01:21:57.840000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffd543c2d0 a2=94 a3=7fffd543c4b0 items=0 ppid=4021 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.840000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:21:57.846000 audit: BPF prog-id=183 
op=LOAD Jan 14 01:21:57.846000 audit[4204]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe0e3fa9d0 a2=98 a3=3 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.846000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:57.846000 audit: BPF prog-id=183 op=UNLOAD Jan 14 01:21:57.846000 audit[4204]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe0e3fa9a0 a3=0 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.846000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:57.846000 audit: BPF prog-id=184 op=LOAD Jan 14 01:21:57.846000 audit[4204]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe0e3fa7c0 a2=94 a3=54428f items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.846000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:57.846000 audit: BPF prog-id=184 op=UNLOAD Jan 14 01:21:57.846000 audit[4204]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe0e3fa7c0 a2=94 a3=54428f items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.846000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:57.846000 audit: BPF prog-id=185 op=LOAD Jan 14 01:21:57.846000 audit[4204]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=5 a1=7ffe0e3fa7f0 a2=94 a3=2 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.846000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:57.847000 audit: BPF prog-id=185 op=UNLOAD Jan 14 01:21:57.847000 audit[4204]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe0e3fa7f0 a2=0 a3=2 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:57.847000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:57.856894 containerd[1611]: time="2026-01-14T01:21:57.829312562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:57.856894 containerd[1611]: time="2026-01-14T01:21:57.842691327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:21:57.948152 containerd[1611]: time="2026-01-14T01:21:57.948069178Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:57.950208 containerd[1611]: time="2026-01-14T01:21:57.950069194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:21:57.950208 containerd[1611]: time="2026-01-14T01:21:57.950163920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:57.950434 kubelet[2787]: E0114 01:21:57.950350 2787 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:21:57.950634 kubelet[2787]: E0114 01:21:57.950429 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:21:57.951581 kubelet[2787]: E0114 01:21:57.950912 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wcbl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-
log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c69b4ddbc-mp7cc_calico-system(0c6080b7-a312-4044-afca-8c80fd4d65bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:57.952450 kubelet[2787]: E0114 01:21:57.952301 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c69b4ddbc-mp7cc" podUID="0c6080b7-a312-4044-afca-8c80fd4d65bc" Jan 14 01:21:58.124000 audit: BPF prog-id=186 op=LOAD Jan 14 01:21:58.124000 audit[4204]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe0e3fa6b0 a2=94 a3=1 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.124000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:58.124000 audit: BPF prog-id=186 op=UNLOAD Jan 14 01:21:58.124000 audit[4204]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe0e3fa6b0 a2=94 a3=1 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.124000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:58.139000 audit: BPF prog-id=187 op=LOAD Jan 14 01:21:58.139000 audit[4204]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe0e3fa6a0 a2=94 a3=4 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.139000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:58.139000 audit: BPF prog-id=187 op=UNLOAD Jan 14 01:21:58.139000 audit[4204]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe0e3fa6a0 a2=0 a3=4 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.139000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:58.139000 audit: BPF prog-id=188 op=LOAD Jan 14 01:21:58.139000 audit[4204]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe0e3fa500 a2=94 a3=5 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:21:58.139000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:58.139000 audit: BPF prog-id=188 op=UNLOAD Jan 14 01:21:58.139000 audit[4204]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe0e3fa500 a2=0 a3=5 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.139000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:58.139000 audit: BPF prog-id=189 op=LOAD Jan 14 01:21:58.139000 audit[4204]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe0e3fa720 a2=94 a3=6 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.139000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:58.140000 audit: BPF prog-id=189 op=UNLOAD Jan 14 01:21:58.140000 audit[4204]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe0e3fa720 a2=0 a3=6 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:58.140000 audit: BPF prog-id=190 op=LOAD Jan 14 01:21:58.140000 audit[4204]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe0e3f9ed0 a2=94 a3=88 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.140000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:58.140000 audit: BPF prog-id=191 op=LOAD Jan 14 01:21:58.140000 audit[4204]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe0e3f9d50 a2=94 a3=2 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:58.140000 audit: BPF prog-id=191 op=UNLOAD Jan 14 01:21:58.140000 audit[4204]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe0e3f9d80 a2=0 a3=7ffe0e3f9e80 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:58.141000 audit: BPF prog-id=190 op=UNLOAD Jan 14 01:21:58.141000 audit[4204]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2d7a4d10 a2=0 a3=e28999da93e6ea55 items=0 ppid=4021 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.141000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:58.156000 audit: BPF prog-id=192 op=LOAD Jan 14 01:21:58.156000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6983fc40 a2=98 a3=1999999999999999 items=0 ppid=4021 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.156000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:21:58.156000 audit: BPF prog-id=192 op=UNLOAD Jan 14 01:21:58.156000 audit[4209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff6983fc10 a3=0 items=0 ppid=4021 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.156000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:21:58.156000 audit: BPF prog-id=193 op=LOAD Jan 14 01:21:58.156000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6983fb20 a2=94 a3=ffff items=0 ppid=4021 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.156000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:21:58.156000 audit: BPF prog-id=193 op=UNLOAD Jan 14 01:21:58.156000 audit[4209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff6983fb20 a2=94 a3=ffff items=0 ppid=4021 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.156000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:21:58.156000 audit: BPF prog-id=194 op=LOAD Jan 14 01:21:58.156000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6983fb60 a2=94 a3=7fff6983fd40 items=0 ppid=4021 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.156000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:21:58.156000 audit: BPF prog-id=194 op=UNLOAD Jan 14 01:21:58.156000 audit[4209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff6983fb60 a2=94 a3=7fff6983fd40 items=0 ppid=4021 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.156000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:21:58.237892 systemd-networkd[1516]: vxlan.calico: Link UP Jan 14 01:21:58.237907 systemd-networkd[1516]: vxlan.calico: Gained carrier Jan 14 01:21:58.289000 audit: BPF prog-id=195 op=LOAD Jan 14 01:21:58.289000 audit[4236]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc50dc5400 a2=98 a3=0 items=0 ppid=4021 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:58.289000 audit: BPF prog-id=195 op=UNLOAD Jan 14 01:21:58.289000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc50dc53d0 a3=0 items=0 ppid=4021 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:58.289000 audit: BPF prog-id=196 op=LOAD Jan 14 01:21:58.289000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc50dc5210 a2=94 a3=54428f items=0 ppid=4021 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:58.289000 audit: BPF prog-id=196 op=UNLOAD Jan 14 01:21:58.289000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc50dc5210 
a2=94 a3=54428f items=0 ppid=4021 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:58.289000 audit: BPF prog-id=197 op=LOAD Jan 14 01:21:58.289000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc50dc5240 a2=94 a3=2 items=0 ppid=4021 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:58.289000 audit: BPF prog-id=197 op=UNLOAD Jan 14 01:21:58.289000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc50dc5240 a2=0 a3=2 items=0 ppid=4021 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:58.289000 audit: BPF prog-id=198 op=LOAD Jan 14 01:21:58.289000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc50dc4ff0 a2=94 a3=4 items=0 ppid=4021 pid=4236 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:58.289000 audit: BPF prog-id=198 op=UNLOAD Jan 14 01:21:58.289000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc50dc4ff0 a2=94 a3=4 items=0 ppid=4021 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:58.289000 audit: BPF prog-id=199 op=LOAD Jan 14 01:21:58.289000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc50dc50f0 a2=94 a3=7ffc50dc5270 items=0 ppid=4021 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.289000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:58.290000 audit: BPF prog-id=199 op=UNLOAD Jan 14 01:21:58.290000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc50dc50f0 a2=0 a3=7ffc50dc5270 items=0 ppid=4021 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.290000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:58.291000 audit: BPF prog-id=200 op=LOAD Jan 14 01:21:58.291000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc50dc4820 a2=94 a3=2 items=0 ppid=4021 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.291000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:58.291000 audit: BPF prog-id=200 op=UNLOAD Jan 14 01:21:58.291000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc50dc4820 a2=0 a3=2 items=0 ppid=4021 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.291000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:58.291000 audit: BPF prog-id=201 op=LOAD Jan 14 01:21:58.291000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc50dc4920 a2=94 a3=30 items=0 ppid=4021 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.291000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:58.301000 audit: BPF prog-id=202 op=LOAD Jan 14 01:21:58.301000 audit[4244]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdc4c3f000 a2=98 a3=0 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.301000 audit: BPF prog-id=202 op=UNLOAD Jan 14 01:21:58.301000 audit[4244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdc4c3efd0 a3=0 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.302000 audit: BPF prog-id=203 op=LOAD Jan 14 01:21:58.302000 audit[4244]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdc4c3edf0 a2=94 a3=54428f items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.302000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.302000 audit: BPF prog-id=203 op=UNLOAD Jan 14 01:21:58.302000 audit[4244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdc4c3edf0 a2=94 a3=54428f items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.302000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.302000 audit: BPF prog-id=204 op=LOAD Jan 14 01:21:58.302000 audit[4244]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdc4c3ee20 a2=94 a3=2 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.302000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.302000 audit: BPF prog-id=204 op=UNLOAD Jan 14 01:21:58.302000 audit[4244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdc4c3ee20 a2=0 a3=2 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.302000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.402813 kubelet[2787]: I0114 01:21:58.401109 2787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1b6ad95-23c6-499e-8a3d-6d0948845f18" path="/var/lib/kubelet/pods/f1b6ad95-23c6-499e-8a3d-6d0948845f18/volumes" Jan 14 01:21:58.428732 systemd-networkd[1516]: califeef95731a0: Gained IPv6LL Jan 14 01:21:58.496000 audit: BPF prog-id=205 op=LOAD Jan 14 01:21:58.496000 audit[4244]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdc4c3ece0 a2=94 a3=1 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.496000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.496000 audit: BPF prog-id=205 op=UNLOAD Jan 14 01:21:58.496000 audit[4244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdc4c3ece0 a2=94 a3=1 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.496000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.506000 audit: BPF prog-id=206 op=LOAD Jan 14 01:21:58.506000 audit[4244]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdc4c3ecd0 a2=94 a3=4 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.506000 audit: BPF prog-id=206 op=UNLOAD Jan 14 01:21:58.506000 audit[4244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdc4c3ecd0 a2=0 a3=4 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.506000 audit: BPF prog-id=207 op=LOAD Jan 14 01:21:58.506000 audit[4244]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdc4c3eb30 a2=94 a3=5 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.506000 audit: BPF prog-id=207 op=UNLOAD Jan 14 01:21:58.506000 audit[4244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdc4c3eb30 a2=0 a3=5 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.506000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.506000 audit: BPF prog-id=208 op=LOAD Jan 14 01:21:58.506000 audit[4244]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdc4c3ed50 a2=94 a3=6 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.506000 audit: BPF prog-id=208 op=UNLOAD Jan 14 01:21:58.506000 audit[4244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdc4c3ed50 a2=0 a3=6 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.507000 audit: BPF prog-id=209 op=LOAD Jan 14 01:21:58.507000 audit[4244]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdc4c3e500 a2=94 a3=88 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.507000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.507000 audit: BPF prog-id=210 op=LOAD Jan 14 01:21:58.507000 audit[4244]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffdc4c3e380 a2=94 a3=2 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.507000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.507000 audit: BPF prog-id=210 op=UNLOAD Jan 14 01:21:58.507000 audit[4244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffdc4c3e3b0 a2=0 a3=7ffdc4c3e4b0 items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.507000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.507000 audit: BPF prog-id=209 op=UNLOAD Jan 14 01:21:58.507000 audit[4244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2ea96d10 a2=0 a3=ec8e5ca9b5a5c0ff items=0 ppid=4021 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.507000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:58.519000 audit: BPF prog-id=201 op=UNLOAD Jan 14 01:21:58.519000 audit[4021]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000907680 a2=0 a3=0 items=0 ppid=4013 pid=4021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.519000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 01:21:58.593000 audit[4272]: NETFILTER_CFG table=nat:119 family=2 entries=15 op=nft_register_chain pid=4272 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:58.593000 audit[4272]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff0c172b80 a2=0 a3=7fff0c172b6c items=0 ppid=4021 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.593000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:58.594000 audit[4271]: NETFILTER_CFG table=mangle:120 family=2 entries=16 op=nft_register_chain pid=4271 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:58.594000 audit[4271]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe594bbf20 a2=0 a3=55559e66f000 items=0 ppid=4021 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.594000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:58.601000 audit[4270]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4270 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:58.601000 audit[4270]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe1cb91a00 a2=0 a3=7ffe1cb919ec items=0 ppid=4021 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.601000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:58.609000 audit[4274]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4274 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:58.609000 audit[4274]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffffa71b2d0 a2=0 a3=7ffffa71b2bc items=0 ppid=4021 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.609000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:58.652491 kubelet[2787]: E0114 01:21:58.652214 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:58.655332 kubelet[2787]: E0114 01:21:58.654331 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c69b4ddbc-mp7cc" podUID="0c6080b7-a312-4044-afca-8c80fd4d65bc" Jan 14 01:21:58.697000 audit[4301]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4301 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:58.697000 audit[4301]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe1461b0b0 a2=0 a3=7ffe1461b09c items=0 ppid=2947 pid=4301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.697000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:58.705000 audit[4301]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4301 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:58.705000 audit[4301]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe1461b0b0 a2=0 a3=0 items=0 ppid=2947 pid=4301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:58.705000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:59.393397 kubelet[2787]: E0114 01:21:59.393135 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:59.394336 containerd[1611]: time="2026-01-14T01:21:59.394047647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t5t8t,Uid:b5d1e217-40e3-4ad0-82fb-7639214c6e0d,Namespace:kube-system,Attempt:0,}" Jan 14 01:21:59.548752 systemd-networkd[1516]: calic3f31242b45: Link UP Jan 14 01:21:59.549499 systemd-networkd[1516]: calic3f31242b45: Gained carrier Jan 14 01:21:59.565775 containerd[1611]: 2026-01-14 01:21:59.458 [INFO][4311] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--t5t8t-eth0 coredns-674b8bbfcf- kube-system b5d1e217-40e3-4ad0-82fb-7639214c6e0d 842 0 2026-01-14 01:21:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-t5t8t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic3f31242b45 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5t8t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t5t8t-" Jan 14 01:21:59.565775 containerd[1611]: 2026-01-14 01:21:59.459 [INFO][4311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5t8t" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t5t8t-eth0" Jan 14 01:21:59.565775 containerd[1611]: 2026-01-14 01:21:59.502 [INFO][4325] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" HandleID="k8s-pod-network.f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" Workload="localhost-k8s-coredns--674b8bbfcf--t5t8t-eth0" Jan 14 01:21:59.565986 containerd[1611]: 2026-01-14 01:21:59.503 [INFO][4325] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" HandleID="k8s-pod-network.f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" Workload="localhost-k8s-coredns--674b8bbfcf--t5t8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059e4c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-t5t8t", "timestamp":"2026-01-14 01:21:59.502785759 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:21:59.565986 containerd[1611]: 2026-01-14 01:21:59.503 [INFO][4325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:21:59.565986 containerd[1611]: 2026-01-14 01:21:59.503 [INFO][4325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:21:59.565986 containerd[1611]: 2026-01-14 01:21:59.503 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:21:59.565986 containerd[1611]: 2026-01-14 01:21:59.510 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" host="localhost" Jan 14 01:21:59.565986 containerd[1611]: 2026-01-14 01:21:59.517 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:21:59.565986 containerd[1611]: 2026-01-14 01:21:59.523 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:21:59.565986 containerd[1611]: 2026-01-14 01:21:59.525 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:21:59.565986 containerd[1611]: 2026-01-14 01:21:59.528 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:21:59.565986 containerd[1611]: 2026-01-14 01:21:59.528 [INFO][4325] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" host="localhost" Jan 14 01:21:59.566235 containerd[1611]: 2026-01-14 01:21:59.530 [INFO][4325] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0 Jan 14 01:21:59.566235 containerd[1611]: 2026-01-14 01:21:59.535 [INFO][4325] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" host="localhost" Jan 14 01:21:59.566235 containerd[1611]: 2026-01-14 01:21:59.541 [INFO][4325] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" host="localhost" Jan 14 01:21:59.566235 containerd[1611]: 2026-01-14 01:21:59.541 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" host="localhost" Jan 14 01:21:59.566235 containerd[1611]: 2026-01-14 01:21:59.542 [INFO][4325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:21:59.566235 containerd[1611]: 2026-01-14 01:21:59.542 [INFO][4325] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" HandleID="k8s-pod-network.f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" Workload="localhost-k8s-coredns--674b8bbfcf--t5t8t-eth0" Jan 14 01:21:59.566341 containerd[1611]: 2026-01-14 01:21:59.545 [INFO][4311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5t8t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t5t8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t5t8t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b5d1e217-40e3-4ad0-82fb-7639214c6e0d", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-t5t8t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3f31242b45", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:59.566424 containerd[1611]: 2026-01-14 01:21:59.545 [INFO][4311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5t8t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t5t8t-eth0" Jan 14 01:21:59.566424 containerd[1611]: 2026-01-14 01:21:59.545 [INFO][4311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3f31242b45 ContainerID="f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5t8t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t5t8t-eth0" Jan 14 01:21:59.566424 containerd[1611]: 2026-01-14 01:21:59.549 [INFO][4311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5t8t" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t5t8t-eth0" Jan 14 01:21:59.566479 containerd[1611]: 2026-01-14 01:21:59.550 [INFO][4311] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5t8t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t5t8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t5t8t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b5d1e217-40e3-4ad0-82fb-7639214c6e0d", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0", Pod:"coredns-674b8bbfcf-t5t8t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3f31242b45", MAC:"7e:4e:24:38:e3:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:59.566479 containerd[1611]: 2026-01-14 01:21:59.560 [INFO][4311] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5t8t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t5t8t-eth0" Jan 14 01:21:59.584000 audit[4341]: NETFILTER_CFG table=filter:125 family=2 entries=42 op=nft_register_chain pid=4341 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:59.584000 audit[4341]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffd893713f0 a2=0 a3=7ffd893713dc items=0 ppid=4021 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:59.584000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:59.599809 containerd[1611]: time="2026-01-14T01:21:59.599733868Z" level=info msg="connecting to shim f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0" address="unix:///run/containerd/s/e2d2ac6d86b2cc3401e99529caf773faf475b94dc72638663ce2f9846fba46c4" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:59.649246 systemd[1]: Started cri-containerd-f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0.scope - libcontainer container f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0. 
Jan 14 01:21:59.656854 kubelet[2787]: E0114 01:21:59.656738 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c69b4ddbc-mp7cc" podUID="0c6080b7-a312-4044-afca-8c80fd4d65bc" Jan 14 01:21:59.687000 audit: BPF prog-id=211 op=LOAD Jan 14 01:21:59.688000 audit: BPF prog-id=212 op=LOAD Jan 14 01:21:59.688000 audit[4360]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4350 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:59.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631326538336637663463613835396239633338323736333735626432 Jan 14 01:21:59.688000 audit: BPF prog-id=212 op=UNLOAD Jan 14 01:21:59.688000 audit[4360]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4350 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:59.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631326538336637663463613835396239633338323736333735626432 Jan 14 01:21:59.689000 audit: BPF prog-id=213 op=LOAD Jan 14 01:21:59.689000 audit[4360]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4350 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:59.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631326538336637663463613835396239633338323736333735626432 Jan 14 01:21:59.689000 audit: BPF prog-id=214 op=LOAD Jan 14 01:21:59.689000 audit[4360]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4350 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:59.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631326538336637663463613835396239633338323736333735626432 Jan 14 01:21:59.690000 audit: BPF prog-id=214 op=UNLOAD Jan 14 01:21:59.690000 audit[4360]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4350 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:59.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631326538336637663463613835396239633338323736333735626432 Jan 14 01:21:59.690000 audit: BPF prog-id=213 op=UNLOAD Jan 14 01:21:59.690000 audit[4360]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4350 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:59.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631326538336637663463613835396239633338323736333735626432 Jan 14 01:21:59.690000 audit: BPF prog-id=215 op=LOAD Jan 14 01:21:59.690000 audit[4360]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4350 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:59.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631326538336637663463613835396239633338323736333735626432 Jan 14 01:21:59.692423 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:21:59.707783 systemd-networkd[1516]: vxlan.calico: Gained IPv6LL Jan 14 01:21:59.744192 
containerd[1611]: time="2026-01-14T01:21:59.744076251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t5t8t,Uid:b5d1e217-40e3-4ad0-82fb-7639214c6e0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0\"" Jan 14 01:21:59.745783 kubelet[2787]: E0114 01:21:59.745737 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:21:59.751124 containerd[1611]: time="2026-01-14T01:21:59.751090613Z" level=info msg="CreateContainer within sandbox \"f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:21:59.766748 containerd[1611]: time="2026-01-14T01:21:59.766604169Z" level=info msg="Container 0b4fbfe0aa58eeab22ae7ab634df05f380a7e12f200fa499d0a19da40e836702: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:59.771454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1908883356.mount: Deactivated successfully. 
Jan 14 01:21:59.776413 containerd[1611]: time="2026-01-14T01:21:59.776302141Z" level=info msg="CreateContainer within sandbox \"f12e83f7f4ca859b9c38276375bd21da4d390b4e1cb503609ad718f32f9ba8b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b4fbfe0aa58eeab22ae7ab634df05f380a7e12f200fa499d0a19da40e836702\"" Jan 14 01:21:59.777271 containerd[1611]: time="2026-01-14T01:21:59.776983062Z" level=info msg="StartContainer for \"0b4fbfe0aa58eeab22ae7ab634df05f380a7e12f200fa499d0a19da40e836702\"" Jan 14 01:21:59.778179 containerd[1611]: time="2026-01-14T01:21:59.778002056Z" level=info msg="connecting to shim 0b4fbfe0aa58eeab22ae7ab634df05f380a7e12f200fa499d0a19da40e836702" address="unix:///run/containerd/s/e2d2ac6d86b2cc3401e99529caf773faf475b94dc72638663ce2f9846fba46c4" protocol=ttrpc version=3 Jan 14 01:21:59.815037 systemd[1]: Started cri-containerd-0b4fbfe0aa58eeab22ae7ab634df05f380a7e12f200fa499d0a19da40e836702.scope - libcontainer container 0b4fbfe0aa58eeab22ae7ab634df05f380a7e12f200fa499d0a19da40e836702. 
Jan 14 01:21:59.835000 audit: BPF prog-id=216 op=LOAD Jan 14 01:21:59.836000 audit: BPF prog-id=217 op=LOAD Jan 14 01:21:59.836000 audit[4387]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4350 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:59.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062346662666530616135386565616232326165376162363334646630 Jan 14 01:21:59.836000 audit: BPF prog-id=217 op=UNLOAD Jan 14 01:21:59.836000 audit[4387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4350 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:59.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062346662666530616135386565616232326165376162363334646630 Jan 14 01:21:59.836000 audit: BPF prog-id=218 op=LOAD Jan 14 01:21:59.836000 audit[4387]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4350 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:59.836000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062346662666530616135386565616232326165376162363334646630 Jan 14 01:21:59.836000 audit: BPF prog-id=219 op=LOAD Jan 14 01:21:59.836000 audit[4387]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4350 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:59.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062346662666530616135386565616232326165376162363334646630 Jan 14 01:21:59.837000 audit: BPF prog-id=219 op=UNLOAD Jan 14 01:21:59.837000 audit[4387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4350 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:59.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062346662666530616135386565616232326165376162363334646630 Jan 14 01:21:59.837000 audit: BPF prog-id=218 op=UNLOAD Jan 14 01:21:59.837000 audit[4387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4350 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:21:59.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062346662666530616135386565616232326165376162363334646630 Jan 14 01:21:59.837000 audit: BPF prog-id=220 op=LOAD Jan 14 01:21:59.837000 audit[4387]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4350 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:59.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062346662666530616135386565616232326165376162363334646630 Jan 14 01:21:59.865229 containerd[1611]: time="2026-01-14T01:21:59.865102351Z" level=info msg="StartContainer for \"0b4fbfe0aa58eeab22ae7ab634df05f380a7e12f200fa499d0a19da40e836702\" returns successfully" Jan 14 01:22:00.659459 kubelet[2787]: E0114 01:22:00.659096 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:22:00.674162 kubelet[2787]: I0114 01:22:00.674090 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t5t8t" podStartSLOduration=38.674068903 podStartE2EDuration="38.674068903s" podCreationTimestamp="2026-01-14 01:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:22:00.673497976 +0000 UTC m=+44.451065041" watchObservedRunningTime="2026-01-14 01:22:00.674068903 +0000 UTC m=+44.451635958" Jan 14 
01:22:00.694000 audit[4424]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=4424 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:00.694000 audit[4424]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdb381abf0 a2=0 a3=7ffdb381abdc items=0 ppid=2947 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:00.694000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:00.705000 audit[4424]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=4424 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:00.705000 audit[4424]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdb381abf0 a2=0 a3=0 items=0 ppid=2947 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:00.705000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:00.736000 audit[4426]: NETFILTER_CFG table=filter:128 family=2 entries=17 op=nft_register_rule pid=4426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:00.736000 audit[4426]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe79955680 a2=0 a3=7ffe7995566c items=0 ppid=2947 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:00.736000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:00.747000 audit[4426]: NETFILTER_CFG table=nat:129 family=2 entries=35 op=nft_register_chain pid=4426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:00.747000 audit[4426]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe79955680 a2=0 a3=7ffe7995566c items=0 ppid=2947 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:00.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:01.393688 containerd[1611]: time="2026-01-14T01:22:01.393597070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x2sz9,Uid:09905137-6883-4a25-b76e-d0608b4b6347,Namespace:calico-system,Attempt:0,}" Jan 14 01:22:01.394156 containerd[1611]: time="2026-01-14T01:22:01.393600105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c99cb9d5d-jz6rb,Uid:1bd888e4-98c6-46dd-883e-12946740dfe2,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:22:01.394156 containerd[1611]: time="2026-01-14T01:22:01.393648225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59555f9565-zxzlc,Uid:7724ac30-d973-433e-90c7-10adfa17a249,Namespace:calico-system,Attempt:0,}" Jan 14 01:22:01.563942 systemd-networkd[1516]: calic3f31242b45: Gained IPv6LL Jan 14 01:22:01.612433 systemd-networkd[1516]: calic51c4e419c4: Link UP Jan 14 01:22:01.613405 systemd-networkd[1516]: calic51c4e419c4: Gained carrier Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.480 [INFO][4428] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-goldmane--666569f655--x2sz9-eth0 goldmane-666569f655- calico-system 09905137-6883-4a25-b76e-d0608b4b6347 844 0 2026-01-14 01:21:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-x2sz9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic51c4e419c4 [] [] }} ContainerID="68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" Namespace="calico-system" Pod="goldmane-666569f655-x2sz9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sz9-" Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.481 [INFO][4428] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" Namespace="calico-system" Pod="goldmane-666569f655-x2sz9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sz9-eth0" Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.531 [INFO][4467] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" HandleID="k8s-pod-network.68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" Workload="localhost-k8s-goldmane--666569f655--x2sz9-eth0" Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.531 [INFO][4467] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" HandleID="k8s-pod-network.68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" Workload="localhost-k8s-goldmane--666569f655--x2sz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-x2sz9", "timestamp":"2026-01-14 
01:22:01.531419881 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.531 [INFO][4467] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.531 [INFO][4467] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.531 [INFO][4467] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.542 [INFO][4467] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" host="localhost" Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.551 [INFO][4467] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.567 [INFO][4467] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.570 [INFO][4467] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.573 [INFO][4467] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.573 [INFO][4467] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" host="localhost" Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.577 [INFO][4467] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0 Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.583 [INFO][4467] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" host="localhost" Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.594 [INFO][4467] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" host="localhost" Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.595 [INFO][4467] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" host="localhost" Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.595 [INFO][4467] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:22:01.637620 containerd[1611]: 2026-01-14 01:22:01.595 [INFO][4467] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" HandleID="k8s-pod-network.68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" Workload="localhost-k8s-goldmane--666569f655--x2sz9-eth0" Jan 14 01:22:01.638978 containerd[1611]: 2026-01-14 01:22:01.599 [INFO][4428] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" Namespace="calico-system" Pod="goldmane-666569f655-x2sz9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--x2sz9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"09905137-6883-4a25-b76e-d0608b4b6347", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-x2sz9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic51c4e419c4", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:22:01.638978 containerd[1611]: 2026-01-14 01:22:01.599 [INFO][4428] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" Namespace="calico-system" Pod="goldmane-666569f655-x2sz9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sz9-eth0" Jan 14 01:22:01.638978 containerd[1611]: 2026-01-14 01:22:01.599 [INFO][4428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic51c4e419c4 ContainerID="68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" Namespace="calico-system" Pod="goldmane-666569f655-x2sz9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sz9-eth0" Jan 14 01:22:01.638978 containerd[1611]: 2026-01-14 01:22:01.614 [INFO][4428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" Namespace="calico-system" Pod="goldmane-666569f655-x2sz9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sz9-eth0" Jan 14 01:22:01.638978 containerd[1611]: 2026-01-14 01:22:01.616 [INFO][4428] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" Namespace="calico-system" Pod="goldmane-666569f655-x2sz9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--x2sz9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"09905137-6883-4a25-b76e-d0608b4b6347", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 36, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0", Pod:"goldmane-666569f655-x2sz9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic51c4e419c4", MAC:"aa:a9:2b:ba:fa:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:22:01.638978 containerd[1611]: 2026-01-14 01:22:01.632 [INFO][4428] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" Namespace="calico-system" Pod="goldmane-666569f655-x2sz9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sz9-eth0" Jan 14 01:22:01.664354 kubelet[2787]: E0114 01:22:01.663487 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:22:01.667000 audit[4505]: NETFILTER_CFG table=filter:130 family=2 entries=54 op=nft_register_chain pid=4505 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:22:01.682039 kernel: kauditd_printk_skb: 290 callbacks suppressed Jan 14 01:22:01.682165 kernel: audit: type=1325 audit(1768353721.667:671): table=filter:130 family=2 entries=54 op=nft_register_chain pid=4505 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:22:01.667000 audit[4505]: SYSCALL arch=c000003e syscall=46 success=yes exit=29220 a0=3 a1=7ffc80406900 a2=0 a3=7ffc804068ec items=0 ppid=4021 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.701657 kernel: audit: type=1300 audit(1768353721.667:671): arch=c000003e syscall=46 success=yes exit=29220 a0=3 a1=7ffc80406900 a2=0 a3=7ffc804068ec items=0 ppid=4021 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.702057 containerd[1611]: time="2026-01-14T01:22:01.702012891Z" level=info msg="connecting to shim 68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0" address="unix:///run/containerd/s/0d6e3dd4d57c509e746e731d68ac5d3aa0351688b512bf5462d852d315ed7d0e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:22:01.667000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:22:01.715575 kernel: audit: type=1327 audit(1768353721.667:671): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:22:01.753616 systemd-networkd[1516]: cali917faad5fa8: Link UP Jan 14 01:22:01.756426 systemd-networkd[1516]: cali917faad5fa8: Gained carrier Jan 14 01:22:01.762984 systemd[1]: Started cri-containerd-68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0.scope - libcontainer container 68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0. 
Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.501 [INFO][4436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c99cb9d5d--jz6rb-eth0 calico-apiserver-6c99cb9d5d- calico-apiserver 1bd888e4-98c6-46dd-883e-12946740dfe2 846 0 2026-01-14 01:21:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c99cb9d5d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c99cb9d5d-jz6rb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali917faad5fa8 [] [] }} ContainerID="aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-jz6rb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--jz6rb-" Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.501 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-jz6rb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--jz6rb-eth0" Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.543 [INFO][4477] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" HandleID="k8s-pod-network.aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" Workload="localhost-k8s-calico--apiserver--6c99cb9d5d--jz6rb-eth0" Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.544 [INFO][4477] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" 
HandleID="k8s-pod-network.aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" Workload="localhost-k8s-calico--apiserver--6c99cb9d5d--jz6rb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043c070), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c99cb9d5d-jz6rb", "timestamp":"2026-01-14 01:22:01.543933096 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.544 [INFO][4477] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.595 [INFO][4477] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.596 [INFO][4477] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.646 [INFO][4477] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" host="localhost" Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.660 [INFO][4477] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.674 [INFO][4477] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.681 [INFO][4477] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.691 [INFO][4477] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:22:01.792357 containerd[1611]: 
2026-01-14 01:22:01.692 [INFO][4477] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" host="localhost" Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.701 [INFO][4477] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.717 [INFO][4477] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" host="localhost" Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.728 [INFO][4477] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" host="localhost" Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.728 [INFO][4477] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" host="localhost" Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.730 [INFO][4477] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:22:01.792357 containerd[1611]: 2026-01-14 01:22:01.730 [INFO][4477] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" HandleID="k8s-pod-network.aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" Workload="localhost-k8s-calico--apiserver--6c99cb9d5d--jz6rb-eth0" Jan 14 01:22:01.795919 containerd[1611]: 2026-01-14 01:22:01.745 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-jz6rb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--jz6rb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c99cb9d5d--jz6rb-eth0", GenerateName:"calico-apiserver-6c99cb9d5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"1bd888e4-98c6-46dd-883e-12946740dfe2", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c99cb9d5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c99cb9d5d-jz6rb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali917faad5fa8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:22:01.795919 containerd[1611]: 2026-01-14 01:22:01.745 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-jz6rb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--jz6rb-eth0" Jan 14 01:22:01.795919 containerd[1611]: 2026-01-14 01:22:01.745 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali917faad5fa8 ContainerID="aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-jz6rb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--jz6rb-eth0" Jan 14 01:22:01.795919 containerd[1611]: 2026-01-14 01:22:01.761 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-jz6rb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--jz6rb-eth0" Jan 14 01:22:01.795919 containerd[1611]: 2026-01-14 01:22:01.766 [INFO][4436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-jz6rb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--jz6rb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c99cb9d5d--jz6rb-eth0", 
GenerateName:"calico-apiserver-6c99cb9d5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"1bd888e4-98c6-46dd-883e-12946740dfe2", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c99cb9d5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d", Pod:"calico-apiserver-6c99cb9d5d-jz6rb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali917faad5fa8", MAC:"ce:08:3d:c4:e0:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:22:01.795919 containerd[1611]: 2026-01-14 01:22:01.779 [INFO][4436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-jz6rb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--jz6rb-eth0" Jan 14 01:22:01.810000 audit: BPF prog-id=221 op=LOAD Jan 14 01:22:01.812000 audit: BPF prog-id=222 op=LOAD Jan 14 01:22:01.820330 kernel: audit: type=1334 audit(1768353721.810:672): prog-id=221 op=LOAD Jan 14 01:22:01.820390 kernel: audit: type=1334 
audit(1768353721.812:673): prog-id=222 op=LOAD Jan 14 01:22:01.820420 kernel: audit: type=1300 audit(1768353721.812:673): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.812000 audit[4526]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.837689 kernel: audit: type=1327 audit(1768353721.812:673): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323539623365303533646131316136353864363937626634373335 Jan 14 01:22:01.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323539623365303533646131316136353864363937626634373335 Jan 14 01:22:01.820871 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:22:01.856545 containerd[1611]: time="2026-01-14T01:22:01.856429148Z" level=info msg="connecting to shim aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d" address="unix:///run/containerd/s/7e909ec8618b2a40ad4f7fa39b55d4b3a7f427ed96a9044d237d343f5aaf48c8" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:22:01.812000 audit: BPF prog-id=222 op=UNLOAD Jan 14 01:22:01.866605 kernel: audit: type=1334 audit(1768353721.812:674): prog-id=222 op=UNLOAD Jan 14 
01:22:01.812000 audit[4526]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.878977 systemd-networkd[1516]: calie7aa5a6abd4: Link UP Jan 14 01:22:01.886304 kernel: audit: type=1300 audit(1768353721.812:674): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.883736 systemd-networkd[1516]: calie7aa5a6abd4: Gained carrier Jan 14 01:22:01.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323539623365303533646131316136353864363937626634373335 Jan 14 01:22:01.900690 kernel: audit: type=1327 audit(1768353721.812:674): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323539623365303533646131316136353864363937626634373335 Jan 14 01:22:01.812000 audit: BPF prog-id=223 op=LOAD Jan 14 01:22:01.812000 audit[4526]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.812000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323539623365303533646131316136353864363937626634373335 Jan 14 01:22:01.812000 audit: BPF prog-id=224 op=LOAD Jan 14 01:22:01.812000 audit[4526]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323539623365303533646131316136353864363937626634373335 Jan 14 01:22:01.812000 audit: BPF prog-id=224 op=UNLOAD Jan 14 01:22:01.812000 audit[4526]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323539623365303533646131316136353864363937626634373335 Jan 14 01:22:01.812000 audit: BPF prog-id=223 op=UNLOAD Jan 14 01:22:01.812000 audit[4526]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:22:01.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323539623365303533646131316136353864363937626634373335 Jan 14 01:22:01.812000 audit: BPF prog-id=225 op=LOAD Jan 14 01:22:01.812000 audit[4526]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323539623365303533646131316136353864363937626634373335 Jan 14 01:22:01.859000 audit[4553]: NETFILTER_CFG table=filter:131 family=2 entries=54 op=nft_register_chain pid=4553 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:22:01.859000 audit[4553]: SYSCALL arch=c000003e syscall=46 success=yes exit=29380 a0=3 a1=7ffdf03d9bf0 a2=0 a3=7ffdf03d9bdc items=0 ppid=4021 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.859000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.509 [INFO][4448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--59555f9565--zxzlc-eth0 calico-kube-controllers-59555f9565- 
calico-system 7724ac30-d973-433e-90c7-10adfa17a249 840 0 2026-01-14 01:21:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59555f9565 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-59555f9565-zxzlc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie7aa5a6abd4 [] [] }} ContainerID="b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" Namespace="calico-system" Pod="calico-kube-controllers-59555f9565-zxzlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59555f9565--zxzlc-" Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.510 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" Namespace="calico-system" Pod="calico-kube-controllers-59555f9565-zxzlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59555f9565--zxzlc-eth0" Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.579 [INFO][4483] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" HandleID="k8s-pod-network.b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" Workload="localhost-k8s-calico--kube--controllers--59555f9565--zxzlc-eth0" Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.580 [INFO][4483] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" HandleID="k8s-pod-network.b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" Workload="localhost-k8s-calico--kube--controllers--59555f9565--zxzlc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ea90), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-59555f9565-zxzlc", "timestamp":"2026-01-14 01:22:01.579969539 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.580 [INFO][4483] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.730 [INFO][4483] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.730 [INFO][4483] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.748 [INFO][4483] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" host="localhost" Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.761 [INFO][4483] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.777 [INFO][4483] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.782 [INFO][4483] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.788 [INFO][4483] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.791 [INFO][4483] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" host="localhost" Jan 14 
01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.796 [INFO][4483] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.804 [INFO][4483] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" host="localhost" Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.838 [INFO][4483] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" host="localhost" Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.839 [INFO][4483] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" host="localhost" Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.839 [INFO][4483] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:22:01.911916 containerd[1611]: 2026-01-14 01:22:01.839 [INFO][4483] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" HandleID="k8s-pod-network.b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" Workload="localhost-k8s-calico--kube--controllers--59555f9565--zxzlc-eth0" Jan 14 01:22:01.912699 containerd[1611]: 2026-01-14 01:22:01.851 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" Namespace="calico-system" Pod="calico-kube-controllers-59555f9565-zxzlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59555f9565--zxzlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59555f9565--zxzlc-eth0", GenerateName:"calico-kube-controllers-59555f9565-", Namespace:"calico-system", SelfLink:"", UID:"7724ac30-d973-433e-90c7-10adfa17a249", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59555f9565", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-59555f9565-zxzlc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie7aa5a6abd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:22:01.912699 containerd[1611]: 2026-01-14 01:22:01.854 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" Namespace="calico-system" Pod="calico-kube-controllers-59555f9565-zxzlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59555f9565--zxzlc-eth0" Jan 14 01:22:01.912699 containerd[1611]: 2026-01-14 01:22:01.854 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7aa5a6abd4 ContainerID="b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" Namespace="calico-system" Pod="calico-kube-controllers-59555f9565-zxzlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59555f9565--zxzlc-eth0" Jan 14 01:22:01.912699 containerd[1611]: 2026-01-14 01:22:01.882 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" Namespace="calico-system" Pod="calico-kube-controllers-59555f9565-zxzlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59555f9565--zxzlc-eth0" Jan 14 01:22:01.912699 containerd[1611]: 2026-01-14 01:22:01.883 [INFO][4448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" Namespace="calico-system" Pod="calico-kube-controllers-59555f9565-zxzlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59555f9565--zxzlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59555f9565--zxzlc-eth0", GenerateName:"calico-kube-controllers-59555f9565-", Namespace:"calico-system", SelfLink:"", UID:"7724ac30-d973-433e-90c7-10adfa17a249", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59555f9565", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f", Pod:"calico-kube-controllers-59555f9565-zxzlc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie7aa5a6abd4", MAC:"16:d8:eb:b0:44:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:22:01.912699 containerd[1611]: 2026-01-14 01:22:01.902 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" Namespace="calico-system" Pod="calico-kube-controllers-59555f9565-zxzlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59555f9565--zxzlc-eth0" Jan 14 01:22:01.936054 systemd[1]: Started cri-containerd-aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d.scope - 
libcontainer container aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d. Jan 14 01:22:01.944000 audit[4603]: NETFILTER_CFG table=filter:132 family=2 entries=44 op=nft_register_chain pid=4603 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:22:01.944000 audit[4603]: SYSCALL arch=c000003e syscall=46 success=yes exit=21936 a0=3 a1=7ffd6a9f0290 a2=0 a3=7ffd6a9f027c items=0 ppid=4021 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.944000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:22:01.946909 containerd[1611]: time="2026-01-14T01:22:01.946788852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x2sz9,Uid:09905137-6883-4a25-b76e-d0608b4b6347,Namespace:calico-system,Attempt:0,} returns sandbox id \"68259b3e053da11a658d697bf473581a9005932a2e393fc9f3263fb4726a25f0\"" Jan 14 01:22:01.951494 containerd[1611]: time="2026-01-14T01:22:01.951350379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:22:01.971000 audit: BPF prog-id=226 op=LOAD Jan 14 01:22:01.972000 audit: BPF prog-id=227 op=LOAD Jan 14 01:22:01.972000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4563 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.972000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161613632653165313963663933333732353535373133636235636537 Jan 14 01:22:01.972000 audit: BPF prog-id=227 op=UNLOAD Jan 14 01:22:01.972000 audit[4579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4563 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161613632653165313963663933333732353535373133636235636537 Jan 14 01:22:01.972000 audit: BPF prog-id=228 op=LOAD Jan 14 01:22:01.972000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4563 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161613632653165313963663933333732353535373133636235636537 Jan 14 01:22:01.972000 audit: BPF prog-id=229 op=LOAD Jan 14 01:22:01.972000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4563 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:22:01.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161613632653165313963663933333732353535373133636235636537 Jan 14 01:22:01.972000 audit: BPF prog-id=229 op=UNLOAD Jan 14 01:22:01.972000 audit[4579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4563 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161613632653165313963663933333732353535373133636235636537 Jan 14 01:22:01.972000 audit: BPF prog-id=228 op=UNLOAD Jan 14 01:22:01.972000 audit[4579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4563 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161613632653165313963663933333732353535373133636235636537 Jan 14 01:22:01.972000 audit: BPF prog-id=230 op=LOAD Jan 14 01:22:01.972000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4563 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:01.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161613632653165313963663933333732353535373133636235636537 Jan 14 01:22:01.974246 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:22:01.987160 containerd[1611]: time="2026-01-14T01:22:01.987033130Z" level=info msg="connecting to shim b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f" address="unix:///run/containerd/s/07e6edb39a527d61e65edba8c7459a3a55e41ac97ed25a1aeee83008ca00d52c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:22:02.025812 systemd[1]: Started cri-containerd-b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f.scope - libcontainer container b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f. 
Jan 14 01:22:02.029271 containerd[1611]: time="2026-01-14T01:22:02.029217707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c99cb9d5d-jz6rb,Uid:1bd888e4-98c6-46dd-883e-12946740dfe2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aaa62e1e19cf93372555713cb5ce72d73e1443f8e30879cfce3a21c63d234d0d\"" Jan 14 01:22:02.041319 containerd[1611]: time="2026-01-14T01:22:02.041240661Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:02.042446 containerd[1611]: time="2026-01-14T01:22:02.042375097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:22:02.042571 containerd[1611]: time="2026-01-14T01:22:02.042462287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:02.043064 kubelet[2787]: E0114 01:22:02.042672 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:22:02.043064 kubelet[2787]: E0114 01:22:02.042710 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:22:02.043064 kubelet[2787]: E0114 01:22:02.042939 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xm9wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x2sz9_calico-system(09905137-6883-4a25-b76e-d0608b4b6347): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:02.043354 containerd[1611]: time="2026-01-14T01:22:02.043225423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:22:02.044356 kubelet[2787]: E0114 01:22:02.044300 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sz9" podUID="09905137-6883-4a25-b76e-d0608b4b6347" Jan 14 01:22:02.048000 audit: BPF prog-id=231 op=LOAD Jan 14 01:22:02.049000 audit: BPF prog-id=232 op=LOAD Jan 14 01:22:02.049000 audit[4630]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 
a3=0 items=0 ppid=4613 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:02.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633330643861656638656238346239393533643731653162663039 Jan 14 01:22:02.049000 audit: BPF prog-id=232 op=UNLOAD Jan 14 01:22:02.049000 audit[4630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4613 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:02.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633330643861656638656238346239393533643731653162663039 Jan 14 01:22:02.050000 audit: BPF prog-id=233 op=LOAD Jan 14 01:22:02.050000 audit[4630]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4613 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:02.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633330643861656638656238346239393533643731653162663039 Jan 14 01:22:02.050000 audit: BPF prog-id=234 op=LOAD Jan 14 01:22:02.050000 audit[4630]: SYSCALL arch=c000003e syscall=321 
success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4613 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:02.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633330643861656638656238346239393533643731653162663039 Jan 14 01:22:02.050000 audit: BPF prog-id=234 op=UNLOAD Jan 14 01:22:02.050000 audit[4630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4613 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:02.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633330643861656638656238346239393533643731653162663039 Jan 14 01:22:02.050000 audit: BPF prog-id=233 op=UNLOAD Jan 14 01:22:02.050000 audit[4630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4613 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:02.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633330643861656638656238346239393533643731653162663039 Jan 14 01:22:02.050000 audit: BPF prog-id=235 op=LOAD Jan 14 01:22:02.050000 audit[4630]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4613 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:02.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633330643861656638656238346239393533643731653162663039 Jan 14 01:22:02.052619 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:22:02.099671 containerd[1611]: time="2026-01-14T01:22:02.099632189Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:02.100774 containerd[1611]: time="2026-01-14T01:22:02.100735368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59555f9565-zxzlc,Uid:7724ac30-d973-433e-90c7-10adfa17a249,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0c30d8aef8eb84b9953d71e1bf098f70b3d3cc89546c15cb6fcce2cfbe6f41f\"" Jan 14 01:22:02.101188 containerd[1611]: time="2026-01-14T01:22:02.101070476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:22:02.101188 containerd[1611]: time="2026-01-14T01:22:02.101162042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:02.101403 kubelet[2787]: E0114 01:22:02.101295 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:02.101403 kubelet[2787]: E0114 01:22:02.101365 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:02.102214 kubelet[2787]: E0114 01:22:02.101930 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbg5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c99cb9d5d-jz6rb_calico-apiserver(1bd888e4-98c6-46dd-883e-12946740dfe2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:02.102383 containerd[1611]: time="2026-01-14T01:22:02.102231441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:22:02.103747 kubelet[2787]: E0114 01:22:02.103643 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-jz6rb" podUID="1bd888e4-98c6-46dd-883e-12946740dfe2" Jan 14 01:22:02.162351 containerd[1611]: time="2026-01-14T01:22:02.162287570Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:22:02.163793 containerd[1611]: time="2026-01-14T01:22:02.163712011Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:22:02.163894 containerd[1611]: time="2026-01-14T01:22:02.163812177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:02.164257 kubelet[2787]: E0114 01:22:02.164128 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:22:02.164257 kubelet[2787]: E0114 01:22:02.164195 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:22:02.164494 kubelet[2787]: E0114 01:22:02.164349 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgsfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59555f9565-zxzlc_calico-system(7724ac30-d973-433e-90c7-10adfa17a249): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:02.165706 kubelet[2787]: E0114 01:22:02.165658 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59555f9565-zxzlc" podUID="7724ac30-d973-433e-90c7-10adfa17a249" Jan 14 01:22:02.667971 kubelet[2787]: E0114 01:22:02.667764 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59555f9565-zxzlc" podUID="7724ac30-d973-433e-90c7-10adfa17a249" Jan 14 01:22:02.669661 kubelet[2787]: E0114 01:22:02.669231 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sz9" podUID="09905137-6883-4a25-b76e-d0608b4b6347" Jan 14 01:22:02.670386 kubelet[2787]: E0114 01:22:02.670147 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:22:02.671071 kubelet[2787]: E0114 01:22:02.670965 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-jz6rb" podUID="1bd888e4-98c6-46dd-883e-12946740dfe2" Jan 14 01:22:02.718000 audit[4662]: NETFILTER_CFG table=filter:133 family=2 entries=14 op=nft_register_rule pid=4662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:02.718000 audit[4662]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff7d892ab0 a2=0 
a3=7fff7d892a9c items=0 ppid=2947 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:02.718000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:02.735000 audit[4662]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=4662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:02.735000 audit[4662]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff7d892ab0 a2=0 a3=7fff7d892a9c items=0 ppid=2947 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:02.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:02.758000 audit[4664]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=4664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:02.758000 audit[4664]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd08a30380 a2=0 a3=7ffd08a3036c items=0 ppid=2947 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:02.758000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:02.769000 audit[4664]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=4664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:02.769000 audit[4664]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd08a30380 a2=0 a3=7ffd08a3036c items=0 ppid=2947 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:02.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:03.163836 systemd-networkd[1516]: calie7aa5a6abd4: Gained IPv6LL Jan 14 01:22:03.291843 systemd-networkd[1516]: cali917faad5fa8: Gained IPv6LL Jan 14 01:22:03.393856 kubelet[2787]: E0114 01:22:03.393676 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:22:03.394165 containerd[1611]: time="2026-01-14T01:22:03.394108675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c99cb9d5d-kj4q4,Uid:2e6b76b0-bbf3-4bda-8c0a-ac8224558858,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:22:03.394165 containerd[1611]: time="2026-01-14T01:22:03.394156187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqxnp,Uid:ba3d93c2-390e-4ba5-bb19-4864194c73f7,Namespace:calico-system,Attempt:0,}" Jan 14 01:22:03.394632 containerd[1611]: time="2026-01-14T01:22:03.394109064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7w7jk,Uid:04319d9b-642a-43be-8c0b-1ecdc12ac533,Namespace:kube-system,Attempt:0,}" Jan 14 01:22:03.419773 systemd-networkd[1516]: calic51c4e419c4: Gained IPv6LL Jan 14 01:22:03.582745 systemd-networkd[1516]: cali560f6a27610: Link UP Jan 14 01:22:03.584432 systemd-networkd[1516]: cali560f6a27610: Gained carrier Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.459 [INFO][4665] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c99cb9d5d--kj4q4-eth0 calico-apiserver-6c99cb9d5d- calico-apiserver 2e6b76b0-bbf3-4bda-8c0a-ac8224558858 845 0 2026-01-14 01:21:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c99cb9d5d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c99cb9d5d-kj4q4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali560f6a27610 [] [] }} ContainerID="50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-kj4q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--kj4q4-" Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.459 [INFO][4665] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-kj4q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--kj4q4-eth0" Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.521 [INFO][4710] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" HandleID="k8s-pod-network.50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" Workload="localhost-k8s-calico--apiserver--6c99cb9d5d--kj4q4-eth0" Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.521 [INFO][4710] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" HandleID="k8s-pod-network.50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" Workload="localhost-k8s-calico--apiserver--6c99cb9d5d--kj4q4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000422ba0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c99cb9d5d-kj4q4", "timestamp":"2026-01-14 01:22:03.52116427 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.521 [INFO][4710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.521 [INFO][4710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.521 [INFO][4710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.533 [INFO][4710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" host="localhost" Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.541 [INFO][4710] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.548 [INFO][4710] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.552 [INFO][4710] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.554 [INFO][4710] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.555 [INFO][4710] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" host="localhost" Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.557 [INFO][4710] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7 Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.564 [INFO][4710] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" host="localhost" Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.574 [INFO][4710] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" host="localhost" Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.574 [INFO][4710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" host="localhost" Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.574 [INFO][4710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:22:03.600010 containerd[1611]: 2026-01-14 01:22:03.574 [INFO][4710] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" HandleID="k8s-pod-network.50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" Workload="localhost-k8s-calico--apiserver--6c99cb9d5d--kj4q4-eth0" Jan 14 01:22:03.600616 containerd[1611]: 2026-01-14 01:22:03.578 [INFO][4665] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-kj4q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--kj4q4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c99cb9d5d--kj4q4-eth0", GenerateName:"calico-apiserver-6c99cb9d5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e6b76b0-bbf3-4bda-8c0a-ac8224558858", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c99cb9d5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c99cb9d5d-kj4q4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali560f6a27610", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:22:03.600616 containerd[1611]: 2026-01-14 01:22:03.578 [INFO][4665] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-kj4q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--kj4q4-eth0" Jan 14 01:22:03.600616 containerd[1611]: 2026-01-14 01:22:03.578 [INFO][4665] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali560f6a27610 ContainerID="50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-kj4q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--kj4q4-eth0" Jan 14 01:22:03.600616 containerd[1611]: 2026-01-14 01:22:03.584 [INFO][4665] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-kj4q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--kj4q4-eth0" Jan 14 01:22:03.600616 containerd[1611]: 2026-01-14 01:22:03.584 [INFO][4665] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-kj4q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--kj4q4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c99cb9d5d--kj4q4-eth0", 
GenerateName:"calico-apiserver-6c99cb9d5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e6b76b0-bbf3-4bda-8c0a-ac8224558858", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c99cb9d5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7", Pod:"calico-apiserver-6c99cb9d5d-kj4q4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali560f6a27610", MAC:"c6:8e:32:d5:2d:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:22:03.600616 containerd[1611]: 2026-01-14 01:22:03.596 [INFO][4665] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" Namespace="calico-apiserver" Pod="calico-apiserver-6c99cb9d5d-kj4q4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c99cb9d5d--kj4q4-eth0" Jan 14 01:22:03.627213 containerd[1611]: time="2026-01-14T01:22:03.627030552Z" level=info msg="connecting to shim 50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7" 
address="unix:///run/containerd/s/a5db862f9ef32be83d5e8a51973b4652e62c228766482ea2f2f4fefddfd04e84" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:22:03.648000 audit[4770]: NETFILTER_CFG table=filter:137 family=2 entries=49 op=nft_register_chain pid=4770 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:22:03.648000 audit[4770]: SYSCALL arch=c000003e syscall=46 success=yes exit=25436 a0=3 a1=7ffcec9d8ba0 a2=0 a3=7ffcec9d8b8c items=0 ppid=4021 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.648000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:22:03.675026 systemd[1]: Started cri-containerd-50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7.scope - libcontainer container 50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7. 
Jan 14 01:22:03.683348 kubelet[2787]: E0114 01:22:03.683265 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sz9" podUID="09905137-6883-4a25-b76e-d0608b4b6347" Jan 14 01:22:03.684958 kubelet[2787]: E0114 01:22:03.682695 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-jz6rb" podUID="1bd888e4-98c6-46dd-883e-12946740dfe2" Jan 14 01:22:03.684958 kubelet[2787]: E0114 01:22:03.684093 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59555f9565-zxzlc" podUID="7724ac30-d973-433e-90c7-10adfa17a249" Jan 14 01:22:03.722000 audit: BPF prog-id=236 op=LOAD Jan 14 01:22:03.725000 audit: BPF prog-id=237 op=LOAD Jan 14 01:22:03.725000 audit[4768]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ec238 a2=98 a3=0 
items=0 ppid=4756 pid=4768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530623466636134656132666163636564396166656131363363656361 Jan 14 01:22:03.725000 audit: BPF prog-id=237 op=UNLOAD Jan 14 01:22:03.725000 audit[4768]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4756 pid=4768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530623466636134656132666163636564396166656131363363656361 Jan 14 01:22:03.726000 audit: BPF prog-id=238 op=LOAD Jan 14 01:22:03.726000 audit[4768]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ec488 a2=98 a3=0 items=0 ppid=4756 pid=4768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530623466636134656132666163636564396166656131363363656361 Jan 14 01:22:03.726000 audit: BPF prog-id=239 op=LOAD Jan 14 01:22:03.726000 audit[4768]: SYSCALL arch=c000003e syscall=321 
success=yes exit=23 a0=5 a1=c0001ec218 a2=98 a3=0 items=0 ppid=4756 pid=4768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530623466636134656132666163636564396166656131363363656361 Jan 14 01:22:03.726000 audit: BPF prog-id=239 op=UNLOAD Jan 14 01:22:03.726000 audit[4768]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4756 pid=4768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530623466636134656132666163636564396166656131363363656361 Jan 14 01:22:03.726000 audit: BPF prog-id=238 op=UNLOAD Jan 14 01:22:03.726000 audit[4768]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4756 pid=4768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530623466636134656132666163636564396166656131363363656361 Jan 14 01:22:03.726000 audit: BPF prog-id=240 op=LOAD Jan 14 01:22:03.726000 audit[4768]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ec6e8 a2=98 a3=0 items=0 ppid=4756 pid=4768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530623466636134656132666163636564396166656131363363656361 Jan 14 01:22:03.726957 systemd-networkd[1516]: cali5cbe1f72129: Link UP Jan 14 01:22:03.730233 systemd-networkd[1516]: cali5cbe1f72129: Gained carrier Jan 14 01:22:03.732877 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.470 [INFO][4678] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--7w7jk-eth0 coredns-674b8bbfcf- kube-system 04319d9b-642a-43be-8c0b-1ecdc12ac533 839 0 2026-01-14 01:21:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-7w7jk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5cbe1f72129 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-7w7jk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7w7jk-" Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.471 [INFO][4678] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-7w7jk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7w7jk-eth0" Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.534 [INFO][4718] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" HandleID="k8s-pod-network.173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" Workload="localhost-k8s-coredns--674b8bbfcf--7w7jk-eth0" Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.534 [INFO][4718] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" HandleID="k8s-pod-network.173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" Workload="localhost-k8s-coredns--674b8bbfcf--7w7jk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040d5a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-7w7jk", "timestamp":"2026-01-14 01:22:03.534361532 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.535 [INFO][4718] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.574 [INFO][4718] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.574 [INFO][4718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.641 [INFO][4718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" host="localhost" Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.654 [INFO][4718] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.664 [INFO][4718] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.668 [INFO][4718] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.672 [INFO][4718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.672 [INFO][4718] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" host="localhost" Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.675 [INFO][4718] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.687 [INFO][4718] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" host="localhost" Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.703 [INFO][4718] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" host="localhost" Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.706 [INFO][4718] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" host="localhost" Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.706 [INFO][4718] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:22:03.805688 containerd[1611]: 2026-01-14 01:22:03.707 [INFO][4718] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" HandleID="k8s-pod-network.173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" Workload="localhost-k8s-coredns--674b8bbfcf--7w7jk-eth0" Jan 14 01:22:03.809108 containerd[1611]: 2026-01-14 01:22:03.720 [INFO][4678] cni-plugin/k8s.go 418: Populated endpoint ContainerID="173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-7w7jk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7w7jk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7w7jk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"04319d9b-642a-43be-8c0b-1ecdc12ac533", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-7w7jk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cbe1f72129", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:22:03.809108 containerd[1611]: 2026-01-14 01:22:03.720 [INFO][4678] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-7w7jk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7w7jk-eth0" Jan 14 01:22:03.809108 containerd[1611]: 2026-01-14 01:22:03.720 [INFO][4678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5cbe1f72129 ContainerID="173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-7w7jk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7w7jk-eth0" Jan 14 01:22:03.809108 containerd[1611]: 2026-01-14 01:22:03.730 [INFO][4678] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-7w7jk" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7w7jk-eth0" Jan 14 01:22:03.809108 containerd[1611]: 2026-01-14 01:22:03.732 [INFO][4678] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-7w7jk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7w7jk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7w7jk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"04319d9b-642a-43be-8c0b-1ecdc12ac533", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a", Pod:"coredns-674b8bbfcf-7w7jk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cbe1f72129", MAC:"4a:a2:cf:3c:ab:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:22:03.809108 containerd[1611]: 2026-01-14 01:22:03.771 [INFO][4678] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-7w7jk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7w7jk-eth0" Jan 14 01:22:03.866000 audit[4804]: NETFILTER_CFG table=filter:138 family=2 entries=54 op=nft_register_chain pid=4804 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:22:03.866000 audit[4804]: SYSCALL arch=c000003e syscall=46 success=yes exit=25556 a0=3 a1=7ffeade7b000 a2=0 a3=7ffeade7afec items=0 ppid=4021 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.866000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:22:03.871218 containerd[1611]: time="2026-01-14T01:22:03.869773319Z" level=info msg="connecting to shim 173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a" address="unix:///run/containerd/s/33f6ba5ad924c81a3797de287dda929ad6d5a64f79c23d74ca831cfa287a1f94" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:22:03.872491 systemd-networkd[1516]: cali3f26e8eb2b0: Link UP Jan 14 01:22:03.873871 systemd-networkd[1516]: cali3f26e8eb2b0: Gained carrier Jan 14 01:22:03.886417 containerd[1611]: time="2026-01-14T01:22:03.886378138Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6c99cb9d5d-kj4q4,Uid:2e6b76b0-bbf3-4bda-8c0a-ac8224558858,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"50b4fca4ea2facced9afea163cecaa231021d588e739b49d377d9d68cbce87a7\"" Jan 14 01:22:03.895584 containerd[1611]: time="2026-01-14T01:22:03.895448936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.490 [INFO][4672] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qqxnp-eth0 csi-node-driver- calico-system ba3d93c2-390e-4ba5-bb19-4864194c73f7 723 0 2026-01-14 01:21:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qqxnp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3f26e8eb2b0 [] [] }} ContainerID="6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" Namespace="calico-system" Pod="csi-node-driver-qqxnp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqxnp-" Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.490 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" Namespace="calico-system" Pod="csi-node-driver-qqxnp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqxnp-eth0" Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.534 [INFO][4726] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" HandleID="k8s-pod-network.6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" 
Workload="localhost-k8s-csi--node--driver--qqxnp-eth0" Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.535 [INFO][4726] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" HandleID="k8s-pod-network.6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" Workload="localhost-k8s-csi--node--driver--qqxnp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qqxnp", "timestamp":"2026-01-14 01:22:03.53486248 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.535 [INFO][4726] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.707 [INFO][4726] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.709 [INFO][4726] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.741 [INFO][4726] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" host="localhost" Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.786 [INFO][4726] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.793 [INFO][4726] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.801 [INFO][4726] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.809 [INFO][4726] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.809 [INFO][4726] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" host="localhost" Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.811 [INFO][4726] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.828 [INFO][4726] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" host="localhost" Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.852 [INFO][4726] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" host="localhost" Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.853 [INFO][4726] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" host="localhost" Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.853 [INFO][4726] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:22:03.910699 containerd[1611]: 2026-01-14 01:22:03.853 [INFO][4726] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" HandleID="k8s-pod-network.6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" Workload="localhost-k8s-csi--node--driver--qqxnp-eth0" Jan 14 01:22:03.911629 containerd[1611]: 2026-01-14 01:22:03.857 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" Namespace="calico-system" Pod="csi-node-driver-qqxnp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqxnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qqxnp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba3d93c2-390e-4ba5-bb19-4864194c73f7", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qqxnp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3f26e8eb2b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:22:03.911629 containerd[1611]: 2026-01-14 01:22:03.857 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" Namespace="calico-system" Pod="csi-node-driver-qqxnp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqxnp-eth0" Jan 14 01:22:03.911629 containerd[1611]: 2026-01-14 01:22:03.859 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f26e8eb2b0 ContainerID="6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" Namespace="calico-system" Pod="csi-node-driver-qqxnp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqxnp-eth0" Jan 14 01:22:03.911629 containerd[1611]: 2026-01-14 01:22:03.879 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" Namespace="calico-system" Pod="csi-node-driver-qqxnp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqxnp-eth0" Jan 14 01:22:03.911629 containerd[1611]: 2026-01-14 01:22:03.880 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" 
Namespace="calico-system" Pod="csi-node-driver-qqxnp" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqxnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qqxnp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba3d93c2-390e-4ba5-bb19-4864194c73f7", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d", Pod:"csi-node-driver-qqxnp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3f26e8eb2b0", MAC:"8e:7c:dd:29:62:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:22:03.911629 containerd[1611]: 2026-01-14 01:22:03.904 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" Namespace="calico-system" Pod="csi-node-driver-qqxnp" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qqxnp-eth0" Jan 14 01:22:03.921990 systemd[1]: Started cri-containerd-173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a.scope - libcontainer container 173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a. Jan 14 01:22:03.929000 audit[4847]: NETFILTER_CFG table=filter:139 family=2 entries=52 op=nft_register_chain pid=4847 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:22:03.929000 audit[4847]: SYSCALL arch=c000003e syscall=46 success=yes exit=24296 a0=3 a1=7ffdeda02080 a2=0 a3=7ffdeda0206c items=0 ppid=4021 pid=4847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.929000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:22:03.940000 audit: BPF prog-id=241 op=LOAD Jan 14 01:22:03.941000 audit: BPF prog-id=242 op=LOAD Jan 14 01:22:03.941000 audit[4827]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=4809 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137336433633463363561653566643762366435326661376162336164 Jan 14 01:22:03.941000 audit: BPF prog-id=242 op=UNLOAD Jan 14 01:22:03.941000 audit[4827]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4809 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137336433633463363561653566643762366435326661376162336164 Jan 14 01:22:03.941000 audit: BPF prog-id=243 op=LOAD Jan 14 01:22:03.941000 audit[4827]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=4809 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137336433633463363561653566643762366435326661376162336164 Jan 14 01:22:03.941000 audit: BPF prog-id=244 op=LOAD Jan 14 01:22:03.941000 audit[4827]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=4809 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137336433633463363561653566643762366435326661376162336164 Jan 14 01:22:03.941000 audit: BPF prog-id=244 op=UNLOAD Jan 14 01:22:03.941000 audit[4827]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4809 pid=4827 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137336433633463363561653566643762366435326661376162336164 Jan 14 01:22:03.941000 audit: BPF prog-id=243 op=UNLOAD Jan 14 01:22:03.941000 audit[4827]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4809 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137336433633463363561653566643762366435326661376162336164 Jan 14 01:22:03.941000 audit: BPF prog-id=245 op=LOAD Jan 14 01:22:03.941000 audit[4827]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=4809 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:03.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137336433633463363561653566643762366435326661376162336164 Jan 14 01:22:03.946072 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 
01:22:03.946883 containerd[1611]: time="2026-01-14T01:22:03.946808906Z" level=info msg="connecting to shim 6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d" address="unix:///run/containerd/s/c4bdffa5e633dd68b30d9a725808f1565ff22c8856bc16b4eea56447dd8b9faa" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:22:03.955139 containerd[1611]: time="2026-01-14T01:22:03.955106040Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:03.957210 containerd[1611]: time="2026-01-14T01:22:03.957167627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:22:03.957630 containerd[1611]: time="2026-01-14T01:22:03.957267401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:03.957682 kubelet[2787]: E0114 01:22:03.957600 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:03.957682 kubelet[2787]: E0114 01:22:03.957662 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:03.957979 kubelet[2787]: E0114 01:22:03.957828 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hnvwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6c99cb9d5d-kj4q4_calico-apiserver(2e6b76b0-bbf3-4bda-8c0a-ac8224558858): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:03.959563 kubelet[2787]: E0114 01:22:03.959384 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-kj4q4" podUID="2e6b76b0-bbf3-4bda-8c0a-ac8224558858" Jan 14 01:22:03.993477 systemd[1]: Started cri-containerd-6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d.scope - libcontainer container 6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d. 
Jan 14 01:22:04.004584 containerd[1611]: time="2026-01-14T01:22:04.004486272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7w7jk,Uid:04319d9b-642a-43be-8c0b-1ecdc12ac533,Namespace:kube-system,Attempt:0,} returns sandbox id \"173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a\"" Jan 14 01:22:04.006466 kubelet[2787]: E0114 01:22:04.006348 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:22:04.012854 containerd[1611]: time="2026-01-14T01:22:04.012825628Z" level=info msg="CreateContainer within sandbox \"173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:22:04.017000 audit: BPF prog-id=246 op=LOAD Jan 14 01:22:04.017000 audit: BPF prog-id=247 op=LOAD Jan 14 01:22:04.017000 audit[4875]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4863 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662616432353133663762336239653637653732303137313834303463 Jan 14 01:22:04.018000 audit: BPF prog-id=247 op=UNLOAD Jan 14 01:22:04.018000 audit[4875]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4863 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.018000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662616432353133663762336239653637653732303137313834303463 Jan 14 01:22:04.018000 audit: BPF prog-id=248 op=LOAD Jan 14 01:22:04.018000 audit[4875]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4863 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662616432353133663762336239653637653732303137313834303463 Jan 14 01:22:04.018000 audit: BPF prog-id=249 op=LOAD Jan 14 01:22:04.018000 audit[4875]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4863 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662616432353133663762336239653637653732303137313834303463 Jan 14 01:22:04.018000 audit: BPF prog-id=249 op=UNLOAD Jan 14 01:22:04.018000 audit[4875]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4863 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:22:04.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662616432353133663762336239653637653732303137313834303463 Jan 14 01:22:04.018000 audit: BPF prog-id=248 op=UNLOAD Jan 14 01:22:04.018000 audit[4875]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4863 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662616432353133663762336239653637653732303137313834303463 Jan 14 01:22:04.018000 audit: BPF prog-id=250 op=LOAD Jan 14 01:22:04.018000 audit[4875]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4863 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662616432353133663762336239653637653732303137313834303463 Jan 14 01:22:04.021446 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:22:04.024646 containerd[1611]: time="2026-01-14T01:22:04.024189793Z" level=info msg="Container fcec0e8bb47fb6af6c6d14380d7bd52894a7389ac615911cff8abc983418451d: CDI 
devices from CRI Config.CDIDevices: []" Jan 14 01:22:04.035798 containerd[1611]: time="2026-01-14T01:22:04.035732367Z" level=info msg="CreateContainer within sandbox \"173d3c4c65ae5fd7b6d52fa7ab3ad1372a1d06560882f57c73526d340b345d3a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fcec0e8bb47fb6af6c6d14380d7bd52894a7389ac615911cff8abc983418451d\"" Jan 14 01:22:04.036997 containerd[1611]: time="2026-01-14T01:22:04.036744471Z" level=info msg="StartContainer for \"fcec0e8bb47fb6af6c6d14380d7bd52894a7389ac615911cff8abc983418451d\"" Jan 14 01:22:04.045274 containerd[1611]: time="2026-01-14T01:22:04.045130645Z" level=info msg="connecting to shim fcec0e8bb47fb6af6c6d14380d7bd52894a7389ac615911cff8abc983418451d" address="unix:///run/containerd/s/33f6ba5ad924c81a3797de287dda929ad6d5a64f79c23d74ca831cfa287a1f94" protocol=ttrpc version=3 Jan 14 01:22:04.049399 containerd[1611]: time="2026-01-14T01:22:04.049327955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqxnp,Uid:ba3d93c2-390e-4ba5-bb19-4864194c73f7,Namespace:calico-system,Attempt:0,} returns sandbox id \"6bad2513f7b3b9e67e7201718404c365d6298b139cc8857442b69a2e8ba0db8d\"" Jan 14 01:22:04.051857 containerd[1611]: time="2026-01-14T01:22:04.051810255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:22:04.094954 systemd[1]: Started cri-containerd-fcec0e8bb47fb6af6c6d14380d7bd52894a7389ac615911cff8abc983418451d.scope - libcontainer container fcec0e8bb47fb6af6c6d14380d7bd52894a7389ac615911cff8abc983418451d. 
Jan 14 01:22:04.113690 containerd[1611]: time="2026-01-14T01:22:04.113487688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:04.115203 containerd[1611]: time="2026-01-14T01:22:04.115161171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:22:04.115491 containerd[1611]: time="2026-01-14T01:22:04.115230278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:04.115883 kubelet[2787]: E0114 01:22:04.115656 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:22:04.115883 kubelet[2787]: E0114 01:22:04.115713 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:22:04.115994 kubelet[2787]: E0114 01:22:04.115904 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d9ggb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qqxnp_calico-system(ba3d93c2-390e-4ba5-bb19-4864194c73f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:22:04.118590 containerd[1611]: time="2026-01-14T01:22:04.118221789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:22:04.120000 audit: BPF prog-id=251 op=LOAD Jan 14 01:22:04.121000 audit: BPF prog-id=252 op=LOAD Jan 14 01:22:04.121000 audit[4905]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4809 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663656330653862623437666236616636633664313433383064376264 Jan 14 01:22:04.121000 audit: BPF prog-id=252 op=UNLOAD Jan 14 01:22:04.121000 audit[4905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4809 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663656330653862623437666236616636633664313433383064376264 Jan 14 01:22:04.121000 audit: BPF prog-id=253 op=LOAD Jan 14 01:22:04.121000 audit[4905]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4809 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.121000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663656330653862623437666236616636633664313433383064376264 Jan 14 01:22:04.121000 audit: BPF prog-id=254 op=LOAD Jan 14 01:22:04.121000 audit[4905]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4809 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663656330653862623437666236616636633664313433383064376264 Jan 14 01:22:04.121000 audit: BPF prog-id=254 op=UNLOAD Jan 14 01:22:04.121000 audit[4905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4809 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663656330653862623437666236616636633664313433383064376264 Jan 14 01:22:04.121000 audit: BPF prog-id=253 op=UNLOAD Jan 14 01:22:04.121000 audit[4905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4809 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:22:04.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663656330653862623437666236616636633664313433383064376264 Jan 14 01:22:04.121000 audit: BPF prog-id=255 op=LOAD Jan 14 01:22:04.121000 audit[4905]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4809 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663656330653862623437666236616636633664313433383064376264 Jan 14 01:22:04.166739 containerd[1611]: time="2026-01-14T01:22:04.166609502Z" level=info msg="StartContainer for \"fcec0e8bb47fb6af6c6d14380d7bd52894a7389ac615911cff8abc983418451d\" returns successfully" Jan 14 01:22:04.206364 containerd[1611]: time="2026-01-14T01:22:04.206068277Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:04.208641 containerd[1611]: time="2026-01-14T01:22:04.208555109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:04.208641 containerd[1611]: time="2026-01-14T01:22:04.208593451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:22:04.209814 kubelet[2787]: E0114 
01:22:04.209771 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:22:04.209950 kubelet[2787]: E0114 01:22:04.209933 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:22:04.211317 kubelet[2787]: E0114 01:22:04.211224 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d9ggb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qqxnp_calico-system(ba3d93c2-390e-4ba5-bb19-4864194c73f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:04.212643 kubelet[2787]: E0114 01:22:04.212590 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qqxnp" podUID="ba3d93c2-390e-4ba5-bb19-4864194c73f7" Jan 14 01:22:04.686283 kubelet[2787]: E0114 01:22:04.686025 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:22:04.692136 kubelet[2787]: E0114 01:22:04.691976 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-kj4q4" podUID="2e6b76b0-bbf3-4bda-8c0a-ac8224558858" Jan 14 01:22:04.694206 kubelet[2787]: E0114 01:22:04.694059 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qqxnp" podUID="ba3d93c2-390e-4ba5-bb19-4864194c73f7" Jan 14 01:22:04.706919 kubelet[2787]: I0114 01:22:04.706705 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7w7jk" podStartSLOduration=42.706686556 podStartE2EDuration="42.706686556s" podCreationTimestamp="2026-01-14 01:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:22:04.705641929 +0000 UTC m=+48.483209014" watchObservedRunningTime="2026-01-14 01:22:04.706686556 +0000 UTC m=+48.484253621" Jan 14 01:22:04.742000 audit[4944]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=4944 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:04.742000 audit[4944]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc47ffb0e0 a2=0 a3=7ffc47ffb0cc items=0 ppid=2947 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.742000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:04.758000 audit[4944]: NETFILTER_CFG table=nat:141 family=2 entries=44 op=nft_register_rule 
pid=4944 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:04.758000 audit[4944]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc47ffb0e0 a2=0 a3=7ffc47ffb0cc items=0 ppid=2947 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.758000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:04.787000 audit[4946]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:04.787000 audit[4946]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd16e7dc80 a2=0 a3=7ffd16e7dc6c items=0 ppid=2947 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.787000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:04.808000 audit[4946]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=4946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:22:04.808000 audit[4946]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd16e7dc80 a2=0 a3=7ffd16e7dc6c items=0 ppid=2947 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:04.808000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:22:05.275921 
systemd-networkd[1516]: cali560f6a27610: Gained IPv6LL Jan 14 01:22:05.596013 systemd-networkd[1516]: cali3f26e8eb2b0: Gained IPv6LL Jan 14 01:22:05.696449 kubelet[2787]: E0114 01:22:05.696387 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-kj4q4" podUID="2e6b76b0-bbf3-4bda-8c0a-ac8224558858" Jan 14 01:22:05.697255 kubelet[2787]: E0114 01:22:05.696726 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:22:05.698265 kubelet[2787]: E0114 01:22:05.698049 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qqxnp" podUID="ba3d93c2-390e-4ba5-bb19-4864194c73f7" Jan 14 01:22:05.724012 systemd-networkd[1516]: cali5cbe1f72129: Gained IPv6LL Jan 14 
01:22:06.698227 kubelet[2787]: E0114 01:22:06.698140 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:22:13.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.134:22-10.0.0.1:40982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:13.844867 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:40982.service - OpenSSH per-connection server daemon (10.0.0.1:40982). Jan 14 01:22:13.848561 kernel: kauditd_printk_skb: 186 callbacks suppressed Jan 14 01:22:13.848628 kernel: audit: type=1130 audit(1768353733.844:741): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.134:22-10.0.0.1:40982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:13.980000 audit[4961]: USER_ACCT pid=4961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:13.981266 sshd[4961]: Accepted publickey for core from 10.0.0.1 port 40982 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:22:13.985203 sshd-session[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:13.982000 audit[4961]: CRED_ACQ pid=4961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:13.993658 systemd-logind[1587]: New session 9 of user core. 
Jan 14 01:22:14.003995 kernel: audit: type=1101 audit(1768353733.980:742): pid=4961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:14.004178 kernel: audit: type=1103 audit(1768353733.982:743): pid=4961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:14.004211 kernel: audit: type=1006 audit(1768353733.982:744): pid=4961 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 14 01:22:13.982000 audit[4961]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc38f7fd30 a2=3 a3=0 items=0 ppid=1 pid=4961 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:14.019598 kernel: audit: type=1300 audit(1768353733.982:744): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc38f7fd30 a2=3 a3=0 items=0 ppid=1 pid=4961 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:14.019740 kernel: audit: type=1327 audit(1768353733.982:744): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:13.982000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:14.031146 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 14 01:22:14.034000 audit[4961]: USER_START pid=4961 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:14.037000 audit[4967]: CRED_ACQ pid=4967 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:14.056763 kernel: audit: type=1105 audit(1768353734.034:745): pid=4961 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:14.056924 kernel: audit: type=1103 audit(1768353734.037:746): pid=4967 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:14.167182 sshd[4967]: Connection closed by 10.0.0.1 port 40982 Jan 14 01:22:14.167253 sshd-session[4961]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:14.168000 audit[4961]: USER_END pid=4961 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:14.172442 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:40982.service: Deactivated successfully. 
Jan 14 01:22:14.175886 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 01:22:14.178776 systemd-logind[1587]: Session 9 logged out. Waiting for processes to exit. Jan 14 01:22:14.181085 systemd-logind[1587]: Removed session 9. Jan 14 01:22:14.169000 audit[4961]: CRED_DISP pid=4961 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:14.194105 kernel: audit: type=1106 audit(1768353734.168:747): pid=4961 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:14.194239 kernel: audit: type=1104 audit(1768353734.169:748): pid=4961 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:14.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.134:22-10.0.0.1:40982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:14.394430 containerd[1611]: time="2026-01-14T01:22:14.394258080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:22:14.472009 containerd[1611]: time="2026-01-14T01:22:14.471785823Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:14.473792 containerd[1611]: time="2026-01-14T01:22:14.473745422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:22:14.474007 containerd[1611]: time="2026-01-14T01:22:14.473837682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:14.474195 kubelet[2787]: E0114 01:22:14.474131 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:22:14.474624 kubelet[2787]: E0114 01:22:14.474199 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:22:14.474624 kubelet[2787]: E0114 01:22:14.474372 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xm9wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x2sz9_calico-system(09905137-6883-4a25-b76e-d0608b4b6347): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:14.476455 kubelet[2787]: E0114 01:22:14.476415 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sz9" podUID="09905137-6883-4a25-b76e-d0608b4b6347" Jan 14 01:22:15.396094 containerd[1611]: time="2026-01-14T01:22:15.395061671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:22:15.467346 containerd[1611]: time="2026-01-14T01:22:15.467094424Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:15.469313 containerd[1611]: 
time="2026-01-14T01:22:15.469165052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:22:15.469313 containerd[1611]: time="2026-01-14T01:22:15.469226887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:15.469677 kubelet[2787]: E0114 01:22:15.469606 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:22:15.469816 kubelet[2787]: E0114 01:22:15.469683 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:22:15.470265 containerd[1611]: time="2026-01-14T01:22:15.470121438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:22:15.470697 kubelet[2787]: E0114 01:22:15.470473 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a68a300fc64e45a2b1bba454e6e6db2f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wcbl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c69b4ddbc-mp7cc_calico-system(0c6080b7-a312-4044-afca-8c80fd4d65bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:15.529825 containerd[1611]: time="2026-01-14T01:22:15.529739347Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:15.531269 containerd[1611]: 
time="2026-01-14T01:22:15.531161787Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:22:15.531269 containerd[1611]: time="2026-01-14T01:22:15.531232479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:15.531561 kubelet[2787]: E0114 01:22:15.531390 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:22:15.531561 kubelet[2787]: E0114 01:22:15.531458 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:22:15.532138 kubelet[2787]: E0114 01:22:15.531782 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgsfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59555f9565-zxzlc_calico-system(7724ac30-d973-433e-90c7-10adfa17a249): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:15.532315 containerd[1611]: time="2026-01-14T01:22:15.532007039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:22:15.533964 kubelet[2787]: E0114 01:22:15.533785 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59555f9565-zxzlc" podUID="7724ac30-d973-433e-90c7-10adfa17a249" Jan 14 01:22:15.597278 containerd[1611]: time="2026-01-14T01:22:15.596872602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io 
Jan 14 01:22:15.598783 containerd[1611]: time="2026-01-14T01:22:15.598706302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:22:15.598853 containerd[1611]: time="2026-01-14T01:22:15.598799924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:15.599151 kubelet[2787]: E0114 01:22:15.599077 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:22:15.599224 kubelet[2787]: E0114 01:22:15.599160 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:22:15.599470 kubelet[2787]: E0114 01:22:15.599325 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wcbl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c69b4ddbc-mp7cc_calico-system(0c6080b7-a312-4044-afca-8c80fd4d65bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:15.600753 kubelet[2787]: E0114 01:22:15.600666 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c69b4ddbc-mp7cc" podUID="0c6080b7-a312-4044-afca-8c80fd4d65bc" Jan 14 01:22:16.396255 containerd[1611]: time="2026-01-14T01:22:16.396090060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:22:16.461927 containerd[1611]: time="2026-01-14T01:22:16.461776543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:16.463553 containerd[1611]: time="2026-01-14T01:22:16.463461800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:22:16.463794 containerd[1611]: time="2026-01-14T01:22:16.463619289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:16.464055 kubelet[2787]: E0114 01:22:16.463927 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:16.464122 kubelet[2787]: E0114 01:22:16.464052 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:16.464332 kubelet[2787]: E0114 01:22:16.464227 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hnvwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c99cb9d5d-kj4q4_calico-apiserver(2e6b76b0-bbf3-4bda-8c0a-ac8224558858): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:16.465868 kubelet[2787]: E0114 01:22:16.465575 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-kj4q4" podUID="2e6b76b0-bbf3-4bda-8c0a-ac8224558858" Jan 14 01:22:17.395835 containerd[1611]: time="2026-01-14T01:22:17.395753076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:22:17.455880 containerd[1611]: time="2026-01-14T01:22:17.455756333Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:22:17.457343 containerd[1611]: time="2026-01-14T01:22:17.457260143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:22:17.457400 containerd[1611]: time="2026-01-14T01:22:17.457365149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:17.457817 kubelet[2787]: E0114 01:22:17.457735 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:17.457817 kubelet[2787]: E0114 01:22:17.457808 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:17.458295 kubelet[2787]: E0114 01:22:17.457979 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbg5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c99cb9d5d-jz6rb_calico-apiserver(1bd888e4-98c6-46dd-883e-12946740dfe2): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:17.460020 kubelet[2787]: E0114 01:22:17.459871 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-jz6rb" podUID="1bd888e4-98c6-46dd-883e-12946740dfe2" Jan 14 01:22:19.186660 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:40988.service - OpenSSH per-connection server daemon (10.0.0.1:40988). Jan 14 01:22:19.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.134:22-10.0.0.1:40988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:19.189631 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:19.189711 kernel: audit: type=1130 audit(1768353739.185:750): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.134:22-10.0.0.1:40988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:19.270000 audit[4991]: USER_ACCT pid=4991 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:19.272122 sshd[4991]: Accepted publickey for core from 10.0.0.1 port 40988 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:22:19.274344 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:19.280667 systemd-logind[1587]: New session 10 of user core. Jan 14 01:22:19.271000 audit[4991]: CRED_ACQ pid=4991 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:19.298152 kernel: audit: type=1101 audit(1768353739.270:751): pid=4991 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:19.298260 kernel: audit: type=1103 audit(1768353739.271:752): pid=4991 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:19.298278 kernel: audit: type=1006 audit(1768353739.271:753): pid=4991 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 14 01:22:19.304054 kernel: audit: type=1300 audit(1768353739.271:753): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc396d3030 a2=3 a3=0 items=0 ppid=1 pid=4991 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:19.271000 audit[4991]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc396d3030 a2=3 a3=0 items=0 ppid=1 pid=4991 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:19.271000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:19.319624 kernel: audit: type=1327 audit(1768353739.271:753): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:19.321968 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 01:22:19.324000 audit[4991]: USER_START pid=4991 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:19.326000 audit[4995]: CRED_ACQ pid=4995 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:19.353643 kernel: audit: type=1105 audit(1768353739.324:754): pid=4991 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:19.353739 kernel: audit: type=1103 audit(1768353739.326:755): pid=4995 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:19.419912 sshd[4995]: Connection closed by 10.0.0.1 port 40988 Jan 14 01:22:19.420297 sshd-session[4991]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:19.421000 audit[4991]: USER_END pid=4991 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:19.425930 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:40988.service: Deactivated successfully. Jan 14 01:22:19.428741 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 01:22:19.430579 systemd-logind[1587]: Session 10 logged out. Waiting for processes to exit. Jan 14 01:22:19.432572 systemd-logind[1587]: Removed session 10. 
Jan 14 01:22:19.421000 audit[4991]: CRED_DISP pid=4991 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:19.448369 kernel: audit: type=1106 audit(1768353739.421:756): pid=4991 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:19.448478 kernel: audit: type=1104 audit(1768353739.421:757): pid=4991 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:19.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.134:22-10.0.0.1:40988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:20.397125 containerd[1611]: time="2026-01-14T01:22:20.397073257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:22:20.458852 containerd[1611]: time="2026-01-14T01:22:20.458782709Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:20.460725 containerd[1611]: time="2026-01-14T01:22:20.460603823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:22:20.460725 containerd[1611]: time="2026-01-14T01:22:20.460709641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:20.461067 kubelet[2787]: E0114 01:22:20.460948 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:22:20.461067 kubelet[2787]: E0114 01:22:20.461037 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:22:20.461662 kubelet[2787]: E0114 01:22:20.461231 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d9ggb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qqxnp_calico-system(ba3d93c2-390e-4ba5-bb19-4864194c73f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:22:20.463779 containerd[1611]: time="2026-01-14T01:22:20.463682197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:22:20.547309 containerd[1611]: time="2026-01-14T01:22:20.547194310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:20.548912 containerd[1611]: time="2026-01-14T01:22:20.548742396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:22:20.548997 containerd[1611]: time="2026-01-14T01:22:20.548852085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:20.549207 kubelet[2787]: E0114 01:22:20.549130 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:22:20.549207 kubelet[2787]: E0114 01:22:20.549187 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:22:20.549383 kubelet[2787]: E0114 01:22:20.549317 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d9ggb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qqxnp_calico-system(ba3d93c2-390e-4ba5-bb19-4864194c73f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:20.551094 kubelet[2787]: E0114 01:22:20.551036 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qqxnp" podUID="ba3d93c2-390e-4ba5-bb19-4864194c73f7" Jan 14 01:22:24.438846 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:39680.service - OpenSSH per-connection server daemon (10.0.0.1:39680). Jan 14 01:22:24.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.134:22-10.0.0.1:39680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:24.441685 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:24.441737 kernel: audit: type=1130 audit(1768353744.438:759): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.134:22-10.0.0.1:39680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:24.517000 audit[5014]: USER_ACCT pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:24.517956 sshd[5014]: Accepted publickey for core from 10.0.0.1 port 39680 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:22:24.521379 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:24.528331 systemd-logind[1587]: New session 11 of user core. Jan 14 01:22:24.519000 audit[5014]: CRED_ACQ pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:24.545875 kernel: audit: type=1101 audit(1768353744.517:760): pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:24.545960 kernel: audit: type=1103 audit(1768353744.519:761): pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:24.545985 kernel: audit: type=1006 audit(1768353744.519:762): pid=5014 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 01:22:24.519000 audit[5014]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc213ff070 a2=3 a3=0 items=0 ppid=1 pid=5014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:24.567001 kernel: audit: type=1300 audit(1768353744.519:762): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc213ff070 a2=3 a3=0 items=0 ppid=1 pid=5014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:24.567078 kernel: audit: type=1327 audit(1768353744.519:762): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:24.519000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:24.580894 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 14 01:22:24.583000 audit[5014]: USER_START pid=5014 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:24.586000 audit[5018]: CRED_ACQ pid=5018 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:24.606838 kernel: audit: type=1105 audit(1768353744.583:763): pid=5014 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:24.606911 kernel: audit: type=1103 audit(1768353744.586:764): pid=5018 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:24.671704 sshd[5018]: Connection closed by 10.0.0.1 port 39680 Jan 14 01:22:24.672014 sshd-session[5014]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:24.673000 audit[5014]: USER_END pid=5014 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:24.676845 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:39680.service: Deactivated successfully. Jan 14 01:22:24.680244 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 01:22:24.683272 systemd-logind[1587]: Session 11 logged out. Waiting for processes to exit. Jan 14 01:22:24.685625 systemd-logind[1587]: Removed session 11. 
Jan 14 01:22:24.673000 audit[5014]: CRED_DISP pid=5014 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:24.702068 kernel: audit: type=1106 audit(1768353744.673:765): pid=5014 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:24.702185 kernel: audit: type=1104 audit(1768353744.673:766): pid=5014 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:24.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.134:22-10.0.0.1:39680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:25.394318 kubelet[2787]: E0114 01:22:25.394270 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sz9" podUID="09905137-6883-4a25-b76e-d0608b4b6347" Jan 14 01:22:27.394948 kubelet[2787]: E0114 01:22:27.394839 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59555f9565-zxzlc" podUID="7724ac30-d973-433e-90c7-10adfa17a249" Jan 14 01:22:28.403420 kubelet[2787]: E0114 01:22:28.403277 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-kj4q4" podUID="2e6b76b0-bbf3-4bda-8c0a-ac8224558858" Jan 14 01:22:28.780249 kubelet[2787]: E0114 01:22:28.780076 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:22:29.394455 kubelet[2787]: E0114 01:22:29.394269 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-jz6rb" podUID="1bd888e4-98c6-46dd-883e-12946740dfe2" Jan 14 01:22:29.698031 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:39692.service - OpenSSH per-connection server daemon (10.0.0.1:39692). Jan 14 01:22:29.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.134:22-10.0.0.1:39692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:29.701140 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:29.701224 kernel: audit: type=1130 audit(1768353749.696:768): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.134:22-10.0.0.1:39692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:29.827000 audit[5058]: USER_ACCT pid=5058 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:29.829220 sshd[5058]: Accepted publickey for core from 10.0.0.1 port 39692 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:22:29.833333 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:29.845616 kernel: audit: type=1101 audit(1768353749.827:769): pid=5058 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:29.845733 kernel: audit: type=1103 audit(1768353749.827:770): pid=5058 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:29.827000 audit[5058]: CRED_ACQ pid=5058 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:29.847428 systemd-logind[1587]: New session 12 of user core. 
Jan 14 01:22:29.866804 kernel: audit: type=1006 audit(1768353749.827:771): pid=5058 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 14 01:22:29.867003 kernel: audit: type=1300 audit(1768353749.827:771): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec4e3af60 a2=3 a3=0 items=0 ppid=1 pid=5058 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:29.827000 audit[5058]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec4e3af60 a2=3 a3=0 items=0 ppid=1 pid=5058 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:29.827000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:29.890217 kernel: audit: type=1327 audit(1768353749.827:771): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:29.893210 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 14 01:22:29.901000 audit[5058]: USER_START pid=5058 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:29.904000 audit[5062]: CRED_ACQ pid=5062 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:29.935341 kernel: audit: type=1105 audit(1768353749.901:772): pid=5058 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:29.935492 kernel: audit: type=1103 audit(1768353749.904:773): pid=5062 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.055184 sshd[5062]: Connection closed by 10.0.0.1 port 39692 Jan 14 01:22:30.055681 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:30.056000 audit[5058]: USER_END pid=5058 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.057000 audit[5058]: CRED_DISP pid=5058 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.083861 kernel: audit: type=1106 audit(1768353750.056:774): pid=5058 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.083969 kernel: audit: type=1104 audit(1768353750.057:775): pid=5058 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.134:22-10.0.0.1:39692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:30.091625 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:39692.service: Deactivated successfully. Jan 14 01:22:30.098308 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 01:22:30.099895 systemd-logind[1587]: Session 12 logged out. Waiting for processes to exit. Jan 14 01:22:30.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.134:22-10.0.0.1:39706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:30.104594 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:39706.service - OpenSSH per-connection server daemon (10.0.0.1:39706). Jan 14 01:22:30.106306 systemd-logind[1587]: Removed session 12. 
Jan 14 01:22:30.183000 audit[5076]: USER_ACCT pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.185817 sshd[5076]: Accepted publickey for core from 10.0.0.1 port 39706 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:22:30.185000 audit[5076]: CRED_ACQ pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.185000 audit[5076]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd74e84e0 a2=3 a3=0 items=0 ppid=1 pid=5076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:30.185000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:30.189039 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:30.201345 systemd-logind[1587]: New session 13 of user core. Jan 14 01:22:30.212907 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 14 01:22:30.216000 audit[5076]: USER_START pid=5076 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.220000 audit[5080]: CRED_ACQ pid=5080 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.369348 sshd[5080]: Connection closed by 10.0.0.1 port 39706 Jan 14 01:22:30.370416 sshd-session[5076]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:30.373000 audit[5076]: USER_END pid=5076 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.374000 audit[5076]: CRED_DISP pid=5076 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.385051 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:39706.service: Deactivated successfully. Jan 14 01:22:30.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.134:22-10.0.0.1:39706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:30.391775 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 01:22:30.394871 systemd-logind[1587]: Session 13 logged out. Waiting for processes to exit. 
Jan 14 01:22:30.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.134:22-10.0.0.1:39714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:30.404461 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:39714.service - OpenSSH per-connection server daemon (10.0.0.1:39714). Jan 14 01:22:30.407337 systemd-logind[1587]: Removed session 13. Jan 14 01:22:30.419614 kubelet[2787]: E0114 01:22:30.419012 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c69b4ddbc-mp7cc" podUID="0c6080b7-a312-4044-afca-8c80fd4d65bc" Jan 14 01:22:30.496000 audit[5091]: USER_ACCT pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.498827 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 39714 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:22:30.498000 audit[5091]: CRED_ACQ pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.499000 audit[5091]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6399dbf0 a2=3 a3=0 items=0 ppid=1 pid=5091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:30.499000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:30.501872 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:30.509349 systemd-logind[1587]: New session 14 of user core. Jan 14 01:22:30.519014 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 14 01:22:30.521000 audit[5091]: USER_START pid=5091 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.523000 audit[5095]: CRED_ACQ pid=5095 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.632788 sshd[5095]: Connection closed by 10.0.0.1 port 39714 Jan 14 01:22:30.633150 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:30.634000 audit[5091]: USER_END pid=5091 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.634000 audit[5091]: CRED_DISP pid=5091 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:30.639968 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:39714.service: Deactivated successfully. Jan 14 01:22:30.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.134:22-10.0.0.1:39714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:30.642908 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 01:22:30.644560 systemd-logind[1587]: Session 14 logged out. Waiting for processes to exit. Jan 14 01:22:30.647351 systemd-logind[1587]: Removed session 14. Jan 14 01:22:35.404332 kubelet[2787]: E0114 01:22:35.404242 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qqxnp" podUID="ba3d93c2-390e-4ba5-bb19-4864194c73f7" Jan 14 01:22:35.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@13-10.0.0.134:22-10.0.0.1:49686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:35.656195 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:49686.service - OpenSSH per-connection server daemon (10.0.0.1:49686). Jan 14 01:22:35.661703 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 01:22:35.676301 kernel: audit: type=1130 audit(1768353755.654:795): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.134:22-10.0.0.1:49686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:35.779000 audit[5109]: USER_ACCT pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:35.782913 sshd[5109]: Accepted publickey for core from 10.0.0.1 port 49686 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:22:35.796178 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:35.786000 audit[5109]: CRED_ACQ pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:35.818163 systemd-logind[1587]: New session 15 of user core. 
Jan 14 01:22:35.826752 kernel: audit: type=1101 audit(1768353755.779:796): pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:35.826835 kernel: audit: type=1103 audit(1768353755.786:797): pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:35.836711 kernel: audit: type=1006 audit(1768353755.786:798): pid=5109 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 01:22:35.836820 kernel: audit: type=1300 audit(1768353755.786:798): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6db03340 a2=3 a3=0 items=0 ppid=1 pid=5109 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:35.786000 audit[5109]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6db03340 a2=3 a3=0 items=0 ppid=1 pid=5109 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:35.786000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:35.863575 kernel: audit: type=1327 audit(1768353755.786:798): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:35.861628 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 01:22:35.866000 audit[5109]: USER_START pid=5109 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:35.870000 audit[5113]: CRED_ACQ pid=5113 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:35.917943 kernel: audit: type=1105 audit(1768353755.866:799): pid=5109 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:35.918388 kernel: audit: type=1103 audit(1768353755.870:800): pid=5113 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:36.102868 sshd[5113]: Connection closed by 10.0.0.1 port 49686 Jan 14 01:22:36.103415 sshd-session[5109]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:36.104000 audit[5109]: USER_END pid=5109 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:36.111752 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:49686.service: Deactivated successfully. 
Jan 14 01:22:36.118944 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 01:22:36.106000 audit[5109]: CRED_DISP pid=5109 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:36.125851 systemd-logind[1587]: Session 15 logged out. Waiting for processes to exit. Jan 14 01:22:36.131064 systemd-logind[1587]: Removed session 15. Jan 14 01:22:36.132425 kernel: audit: type=1106 audit(1768353756.104:801): pid=5109 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:36.132601 kernel: audit: type=1104 audit(1768353756.106:802): pid=5109 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:36.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.134:22-10.0.0.1:49686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:36.397839 containerd[1611]: time="2026-01-14T01:22:36.397695520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:22:36.470590 containerd[1611]: time="2026-01-14T01:22:36.470340809Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:36.473114 containerd[1611]: time="2026-01-14T01:22:36.472659751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:22:36.473114 containerd[1611]: time="2026-01-14T01:22:36.472752261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:36.473290 kubelet[2787]: E0114 01:22:36.472893 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:22:36.473290 kubelet[2787]: E0114 01:22:36.472941 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:22:36.473290 kubelet[2787]: E0114 01:22:36.473143 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xm9wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x2sz9_calico-system(09905137-6883-4a25-b76e-d0608b4b6347): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:36.476773 kubelet[2787]: E0114 01:22:36.476650 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sz9" podUID="09905137-6883-4a25-b76e-d0608b4b6347" Jan 14 01:22:39.395604 kubelet[2787]: E0114 01:22:39.395145 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:22:40.395045 kubelet[2787]: E0114 01:22:40.393464 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:22:40.395331 kubelet[2787]: E0114 01:22:40.395283 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:22:41.129944 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:49692.service - OpenSSH per-connection server daemon (10.0.0.1:49692). Jan 14 01:22:41.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.134:22-10.0.0.1:49692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:41.136002 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:41.136102 kernel: audit: type=1130 audit(1768353761.130:804): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.134:22-10.0.0.1:49692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:41.288000 audit[5132]: USER_ACCT pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:41.289642 sshd[5132]: Accepted publickey for core from 10.0.0.1 port 49692 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:22:41.293609 sshd-session[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:41.291000 audit[5132]: CRED_ACQ pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:41.301900 systemd-logind[1587]: New session 16 of user core. Jan 14 01:22:41.312837 kernel: audit: type=1101 audit(1768353761.288:805): pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:41.313059 kernel: audit: type=1103 audit(1768353761.291:806): pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:41.313100 kernel: audit: type=1006 audit(1768353761.291:807): pid=5132 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 01:22:41.291000 audit[5132]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6dbc74f0 a2=3 a3=0 items=0 ppid=1 pid=5132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:41.334121 kernel: audit: type=1300 audit(1768353761.291:807): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6dbc74f0 a2=3 a3=0 items=0 ppid=1 pid=5132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:41.334273 kernel: audit: type=1327 audit(1768353761.291:807): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:41.291000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:41.335293 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 01:22:41.344000 audit[5132]: USER_START pid=5132 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:41.365106 kernel: audit: type=1105 audit(1768353761.344:808): pid=5132 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:41.365248 kernel: audit: type=1103 audit(1768353761.354:809): pid=5136 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:41.354000 audit[5136]: CRED_ACQ pid=5136 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:41.415428 containerd[1611]: time="2026-01-14T01:22:41.415190295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:22:41.503576 containerd[1611]: time="2026-01-14T01:22:41.499665757Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:41.503576 containerd[1611]: time="2026-01-14T01:22:41.502796685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:22:41.503576 containerd[1611]: time="2026-01-14T01:22:41.502949236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:41.504204 kubelet[2787]: E0114 01:22:41.504160 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:22:41.505011 kubelet[2787]: E0114 01:22:41.504845 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:22:41.507928 kubelet[2787]: E0114 01:22:41.506702 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgsfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59555f9565-zxzlc_calico-system(7724ac30-d973-433e-90c7-10adfa17a249): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:41.509290 kubelet[2787]: E0114 01:22:41.509040 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59555f9565-zxzlc" podUID="7724ac30-d973-433e-90c7-10adfa17a249" Jan 14 01:22:41.509393 containerd[1611]: time="2026-01-14T01:22:41.509129370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:22:41.577387 containerd[1611]: time="2026-01-14T01:22:41.577314190Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:22:41.588614 containerd[1611]: time="2026-01-14T01:22:41.588119078Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:22:41.588965 containerd[1611]: time="2026-01-14T01:22:41.588280638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:41.589453 kubelet[2787]: E0114 01:22:41.589352 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:41.589453 kubelet[2787]: E0114 01:22:41.589443 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:41.589786 kubelet[2787]: E0114 01:22:41.589716 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hnvwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c99cb9d5d-kj4q4_calico-apiserver(2e6b76b0-bbf3-4bda-8c0a-ac8224558858): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:41.591448 kubelet[2787]: E0114 01:22:41.591375 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-kj4q4" podUID="2e6b76b0-bbf3-4bda-8c0a-ac8224558858" Jan 14 01:22:41.599604 sshd[5136]: Connection closed by 10.0.0.1 port 49692 Jan 14 01:22:41.601419 sshd-session[5132]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:41.603000 audit[5132]: USER_END pid=5132 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:41.612189 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:49692.service: Deactivated successfully. Jan 14 01:22:41.615765 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 01:22:41.617594 systemd-logind[1587]: Session 16 logged out. Waiting for processes to exit. Jan 14 01:22:41.620069 systemd-logind[1587]: Removed session 16. 
Jan 14 01:22:41.621598 kernel: audit: type=1106 audit(1768353761.603:810): pid=5132 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:41.603000 audit[5132]: CRED_DISP pid=5132 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:41.636645 kernel: audit: type=1104 audit(1768353761.603:811): pid=5132 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:41.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.134:22-10.0.0.1:49692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:42.398570 containerd[1611]: time="2026-01-14T01:22:42.398301823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:22:42.472872 containerd[1611]: time="2026-01-14T01:22:42.472745447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:42.478643 containerd[1611]: time="2026-01-14T01:22:42.478452123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:22:42.478790 containerd[1611]: time="2026-01-14T01:22:42.478572129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:42.483053 kubelet[2787]: E0114 01:22:42.478930 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:22:42.483053 kubelet[2787]: E0114 01:22:42.479024 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:22:42.483053 kubelet[2787]: E0114 01:22:42.479181 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a68a300fc64e45a2b1bba454e6e6db2f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wcbl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c69b4ddbc-mp7cc_calico-system(0c6080b7-a312-4044-afca-8c80fd4d65bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:42.483407 containerd[1611]: time="2026-01-14T01:22:42.482037251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:22:42.546375 containerd[1611]: 
time="2026-01-14T01:22:42.546264102Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:42.549463 containerd[1611]: time="2026-01-14T01:22:42.549332831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:22:42.549463 containerd[1611]: time="2026-01-14T01:22:42.549378771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:42.549907 kubelet[2787]: E0114 01:22:42.549757 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:22:42.549907 kubelet[2787]: E0114 01:22:42.549883 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:22:42.550674 kubelet[2787]: E0114 01:22:42.550067 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wcbl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c69b4ddbc-mp7cc_calico-system(0c6080b7-a312-4044-afca-8c80fd4d65bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:42.551548 kubelet[2787]: E0114 01:22:42.551394 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c69b4ddbc-mp7cc" podUID="0c6080b7-a312-4044-afca-8c80fd4d65bc" Jan 14 01:22:43.396666 containerd[1611]: time="2026-01-14T01:22:43.396008301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:22:43.460808 containerd[1611]: time="2026-01-14T01:22:43.460725566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:43.463368 containerd[1611]: time="2026-01-14T01:22:43.463229518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:22:43.463368 containerd[1611]: time="2026-01-14T01:22:43.463304312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:43.463704 kubelet[2787]: E0114 01:22:43.463650 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:43.463775 kubelet[2787]: E0114 01:22:43.463717 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:43.463998 kubelet[2787]: E0114 01:22:43.463915 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbg5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c99cb9d5d-jz6rb_calico-apiserver(1bd888e4-98c6-46dd-883e-12946740dfe2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:43.465764 kubelet[2787]: E0114 01:22:43.465683 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-jz6rb" podUID="1bd888e4-98c6-46dd-883e-12946740dfe2" Jan 14 01:22:46.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.134:22-10.0.0.1:60044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:46.636007 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:60044.service - OpenSSH per-connection server daemon (10.0.0.1:60044). Jan 14 01:22:46.639205 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:46.639283 kernel: audit: type=1130 audit(1768353766.635:813): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.134:22-10.0.0.1:60044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:46.844000 audit[5156]: USER_ACCT pid=5156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:46.847579 sshd[5156]: Accepted publickey for core from 10.0.0.1 port 60044 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:22:46.866708 kernel: audit: type=1101 audit(1768353766.844:814): pid=5156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:46.866000 audit[5156]: CRED_ACQ pid=5156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:46.868837 sshd-session[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:46.885162 systemd-logind[1587]: New session 17 of user core. 
Jan 14 01:22:46.898869 kernel: audit: type=1103 audit(1768353766.866:815): pid=5156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:46.898988 kernel: audit: type=1006 audit(1768353766.866:816): pid=5156 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 01:22:46.899035 kernel: audit: type=1300 audit(1768353766.866:816): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb246e1b0 a2=3 a3=0 items=0 ppid=1 pid=5156 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:46.866000 audit[5156]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb246e1b0 a2=3 a3=0 items=0 ppid=1 pid=5156 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:46.866000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:46.926806 kernel: audit: type=1327 audit(1768353766.866:816): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:46.933581 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 01:22:46.943000 audit[5156]: USER_START pid=5156 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:46.953000 audit[5160]: CRED_ACQ pid=5160 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:46.986076 kernel: audit: type=1105 audit(1768353766.943:817): pid=5156 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:46.986658 kernel: audit: type=1103 audit(1768353766.953:818): pid=5160 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:47.141952 sshd[5160]: Connection closed by 10.0.0.1 port 60044 Jan 14 01:22:47.141233 sshd-session[5156]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:47.146000 audit[5156]: USER_END pid=5156 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:47.153096 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:60044.service: Deactivated successfully. 
Jan 14 01:22:47.157407 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 01:22:47.161863 systemd-logind[1587]: Session 17 logged out. Waiting for processes to exit. Jan 14 01:22:47.164920 systemd-logind[1587]: Removed session 17. Jan 14 01:22:47.172590 kernel: audit: type=1106 audit(1768353767.146:819): pid=5156 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:47.172693 kernel: audit: type=1104 audit(1768353767.146:820): pid=5156 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:47.146000 audit[5156]: CRED_DISP pid=5156 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:47.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.134:22-10.0.0.1:60044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:47.395124 kubelet[2787]: E0114 01:22:47.393395 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:22:48.394626 kubelet[2787]: E0114 01:22:48.394278 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:22:49.398222 kubelet[2787]: E0114 01:22:49.398091 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sz9" podUID="09905137-6883-4a25-b76e-d0608b4b6347" Jan 14 01:22:50.409203 containerd[1611]: time="2026-01-14T01:22:50.409164243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:22:50.497170 containerd[1611]: time="2026-01-14T01:22:50.497086125Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:50.501722 containerd[1611]: time="2026-01-14T01:22:50.501563295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:22:50.501722 containerd[1611]: time="2026-01-14T01:22:50.501663441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:50.502134 kubelet[2787]: E0114 01:22:50.502065 2787 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:22:50.502134 kubelet[2787]: E0114 01:22:50.502124 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:22:50.502823 kubelet[2787]: E0114 01:22:50.502280 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d9ggb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[
],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qqxnp_calico-system(ba3d93c2-390e-4ba5-bb19-4864194c73f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:50.504624 containerd[1611]: time="2026-01-14T01:22:50.504482679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:22:50.584155 containerd[1611]: time="2026-01-14T01:22:50.582489319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:50.588405 containerd[1611]: time="2026-01-14T01:22:50.587923449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:22:50.588405 containerd[1611]: time="2026-01-14T01:22:50.588040486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:50.589256 kubelet[2787]: E0114 01:22:50.589175 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:22:50.589256 kubelet[2787]: E0114 01:22:50.589245 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:22:50.590006 kubelet[2787]: E0114 01:22:50.589825 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d9ggb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabil
ities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qqxnp_calico-system(ba3d93c2-390e-4ba5-bb19-4864194c73f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:50.591962 kubelet[2787]: E0114 01:22:50.591905 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qqxnp" podUID="ba3d93c2-390e-4ba5-bb19-4864194c73f7" Jan 14 01:22:52.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.134:22-10.0.0.1:60046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:52.160932 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:60046.service - OpenSSH per-connection server daemon (10.0.0.1:60046). Jan 14 01:22:52.164225 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:52.164359 kernel: audit: type=1130 audit(1768353772.160:822): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.134:22-10.0.0.1:60046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:52.257657 sshd[5173]: Accepted publickey for core from 10.0.0.1 port 60046 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:22:52.256000 audit[5173]: USER_ACCT pid=5173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:52.261466 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:52.257000 audit[5173]: CRED_ACQ pid=5173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:52.274234 systemd-logind[1587]: New session 18 of user core. 
Jan 14 01:22:52.285870 kernel: audit: type=1101 audit(1768353772.256:823): pid=5173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:52.285968 kernel: audit: type=1103 audit(1768353772.257:824): pid=5173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:52.286196 kernel: audit: type=1006 audit(1768353772.257:825): pid=5173 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 01:22:52.295879 kernel: audit: type=1300 audit(1768353772.257:825): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff7127830 a2=3 a3=0 items=0 ppid=1 pid=5173 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:52.257000 audit[5173]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff7127830 a2=3 a3=0 items=0 ppid=1 pid=5173 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:52.312884 kernel: audit: type=1327 audit(1768353772.257:825): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:52.257000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:52.324987 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 14 01:22:52.333000 audit[5173]: USER_START pid=5173 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:52.337000 audit[5177]: CRED_ACQ pid=5177 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:52.353794 kernel: audit: type=1105 audit(1768353772.333:826): pid=5173 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:52.353880 kernel: audit: type=1103 audit(1768353772.337:827): pid=5177 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:22:52.395005 kubelet[2787]: E0114 01:22:52.394903 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-kj4q4" podUID="2e6b76b0-bbf3-4bda-8c0a-ac8224558858" Jan 14 01:22:52.498636 sshd[5177]: Connection closed by 10.0.0.1 port 60046 Jan 14 
01:22:52.499177 sshd-session[5173]: pam_unix(sshd:session): session closed for user core
Jan 14 01:22:52.503000 audit[5173]: USER_END pid=5173 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:52.508798 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:60046.service: Deactivated successfully.
Jan 14 01:22:52.513304 systemd[1]: session-18.scope: Deactivated successfully.
Jan 14 01:22:52.516853 systemd-logind[1587]: Session 18 logged out. Waiting for processes to exit.
Jan 14 01:22:52.519582 systemd-logind[1587]: Removed session 18.
Jan 14 01:22:52.504000 audit[5173]: CRED_DISP pid=5173 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:52.543425 kernel: audit: type=1106 audit(1768353772.503:828): pid=5173 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:52.543665 kernel: audit: type=1104 audit(1768353772.504:829): pid=5173 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:52.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.134:22-10.0.0.1:60046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:55.396458 kubelet[2787]: E0114 01:22:55.396230 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-jz6rb" podUID="1bd888e4-98c6-46dd-883e-12946740dfe2"
Jan 14 01:22:56.421789 kubelet[2787]: E0114 01:22:56.421126 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59555f9565-zxzlc" podUID="7724ac30-d973-433e-90c7-10adfa17a249"
Jan 14 01:22:57.410662 kubelet[2787]: E0114 01:22:57.409776 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c69b4ddbc-mp7cc" podUID="0c6080b7-a312-4044-afca-8c80fd4d65bc"
Jan 14 01:22:57.519645 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:51486.service - OpenSSH per-connection server daemon (10.0.0.1:51486).
Jan 14 01:22:57.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.134:22-10.0.0.1:51486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:57.527611 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 01:22:57.527666 kernel: audit: type=1130 audit(1768353777.519:831): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.134:22-10.0.0.1:51486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:57.690000 audit[5193]: USER_ACCT pid=5193 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:57.697366 sshd[5193]: Accepted publickey for core from 10.0.0.1 port 51486 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0
Jan 14 01:22:57.701428 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:57.698000 audit[5193]: CRED_ACQ pid=5193 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:57.731456 systemd-logind[1587]: New session 19 of user core.
Jan 14 01:22:57.735126 kernel: audit: type=1101 audit(1768353777.690:832): pid=5193 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:57.735243 kernel: audit: type=1103 audit(1768353777.698:833): pid=5193 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:57.755236 kernel: audit: type=1006 audit(1768353777.699:834): pid=5193 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1
Jan 14 01:22:57.755326 kernel: audit: type=1300 audit(1768353777.699:834): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd06303540 a2=3 a3=0 items=0 ppid=1 pid=5193 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:22:57.699000 audit[5193]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd06303540 a2=3 a3=0 items=0 ppid=1 pid=5193 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:22:57.699000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:22:57.774230 kernel: audit: type=1327 audit(1768353777.699:834): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:22:57.773035 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 14 01:22:57.787000 audit[5193]: USER_START pid=5193 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:57.796000 audit[5197]: CRED_ACQ pid=5197 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:57.824980 kernel: audit: type=1105 audit(1768353777.787:835): pid=5193 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:57.825093 kernel: audit: type=1103 audit(1768353777.796:836): pid=5197 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:57.991778 sshd[5197]: Connection closed by 10.0.0.1 port 51486
Jan 14 01:22:57.995576 sshd-session[5193]: pam_unix(sshd:session): session closed for user core
Jan 14 01:22:57.996000 audit[5193]: USER_END pid=5193 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:57.996000 audit[5193]: CRED_DISP pid=5193 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:58.022913 kernel: audit: type=1106 audit(1768353777.996:837): pid=5193 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:58.023037 kernel: audit: type=1104 audit(1768353777.996:838): pid=5193 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:58.033493 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:51486.service: Deactivated successfully.
Jan 14 01:22:58.036433 systemd[1]: session-19.scope: Deactivated successfully.
Jan 14 01:22:58.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.134:22-10.0.0.1:51486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:58.038432 systemd-logind[1587]: Session 19 logged out. Waiting for processes to exit.
Jan 14 01:22:58.046393 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:51494.service - OpenSSH per-connection server daemon (10.0.0.1:51494).
Jan 14 01:22:58.048101 systemd-logind[1587]: Removed session 19.
Jan 14 01:22:58.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.134:22-10.0.0.1:51494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:58.167000 audit[5210]: USER_ACCT pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:58.170998 sshd[5210]: Accepted publickey for core from 10.0.0.1 port 51494 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0
Jan 14 01:22:58.176000 audit[5210]: CRED_ACQ pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:58.178000 audit[5210]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd19c2b390 a2=3 a3=0 items=0 ppid=1 pid=5210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:22:58.178000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:22:58.180708 sshd-session[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:58.192236 systemd-logind[1587]: New session 20 of user core.
Jan 14 01:22:58.210038 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 14 01:22:58.219000 audit[5210]: USER_START pid=5210 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:58.225000 audit[5214]: CRED_ACQ pid=5214 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:58.757609 sshd[5214]: Connection closed by 10.0.0.1 port 51494
Jan 14 01:22:58.757350 sshd-session[5210]: pam_unix(sshd:session): session closed for user core
Jan 14 01:22:58.760000 audit[5210]: USER_END pid=5210 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:58.761000 audit[5210]: CRED_DISP pid=5210 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:58.775417 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:51494.service: Deactivated successfully.
Jan 14 01:22:58.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.134:22-10.0.0.1:51494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:58.784388 systemd[1]: session-20.scope: Deactivated successfully.
Jan 14 01:22:58.789994 systemd-logind[1587]: Session 20 logged out. Waiting for processes to exit.
Jan 14 01:22:58.795998 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:51506.service - OpenSSH per-connection server daemon (10.0.0.1:51506).
Jan 14 01:22:58.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.134:22-10.0.0.1:51506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:58.799488 systemd-logind[1587]: Removed session 20.
Jan 14 01:22:58.914056 sshd[5251]: Accepted publickey for core from 10.0.0.1 port 51506 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0
Jan 14 01:22:58.912000 audit[5251]: USER_ACCT pid=5251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:58.914000 audit[5251]: CRED_ACQ pid=5251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:58.915000 audit[5251]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe25da40e0 a2=3 a3=0 items=0 ppid=1 pid=5251 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:22:58.915000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:22:58.917277 sshd-session[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:58.936415 systemd-logind[1587]: New session 21 of user core.
Jan 14 01:22:58.951113 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 14 01:22:58.958000 audit[5251]: USER_START pid=5251 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:22:58.964000 audit[5258]: CRED_ACQ pid=5258 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:00.006000 audit[5273]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5273 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:23:00.006000 audit[5273]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc5903bbc0 a2=0 a3=7ffc5903bbac items=0 ppid=2947 pid=5273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:23:00.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:23:00.022000 audit[5273]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5273 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:23:00.022000 audit[5273]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc5903bbc0 a2=0 a3=0 items=0 ppid=2947 pid=5273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:23:00.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:23:00.033934 sshd[5258]: Connection closed by 10.0.0.1 port 51506
Jan 14 01:23:00.035809 sshd-session[5251]: pam_unix(sshd:session): session closed for user core
Jan 14 01:23:00.037000 audit[5251]: USER_END pid=5251 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:00.038000 audit[5251]: CRED_DISP pid=5251 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:00.054993 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:51506.service: Deactivated successfully.
Jan 14 01:23:00.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.134:22-10.0.0.1:51506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:23:00.059264 systemd[1]: session-21.scope: Deactivated successfully.
Jan 14 01:23:00.061688 systemd-logind[1587]: Session 21 logged out. Waiting for processes to exit.
Jan 14 01:23:00.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.134:22-10.0.0.1:51514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:23:00.067958 systemd[1]: Started sshd@20-10.0.0.134:22-10.0.0.1:51514.service - OpenSSH per-connection server daemon (10.0.0.1:51514).
Jan 14 01:23:00.068000 audit[5277]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5277 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:23:00.068000 audit[5277]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc8883c730 a2=0 a3=7ffc8883c71c items=0 ppid=2947 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:23:00.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:23:00.072014 systemd-logind[1587]: Removed session 21.
Jan 14 01:23:00.076000 audit[5277]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5277 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:23:00.076000 audit[5277]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc8883c730 a2=0 a3=0 items=0 ppid=2947 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:23:00.076000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:23:00.177283 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 51514 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0
Jan 14 01:23:00.176000 audit[5280]: USER_ACCT pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:00.178000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:00.178000 audit[5280]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde77759c0 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:23:00.178000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:23:00.180903 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:23:00.197430 systemd-logind[1587]: New session 22 of user core.
Jan 14 01:23:00.213055 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 14 01:23:00.220000 audit[5280]: USER_START pid=5280 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:00.227000 audit[5284]: CRED_ACQ pid=5284 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:00.612759 sshd[5284]: Connection closed by 10.0.0.1 port 51514
Jan 14 01:23:00.614046 sshd-session[5280]: pam_unix(sshd:session): session closed for user core
Jan 14 01:23:00.618000 audit[5280]: USER_END pid=5280 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:00.618000 audit[5280]: CRED_DISP pid=5280 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:00.640037 systemd[1]: sshd@20-10.0.0.134:22-10.0.0.1:51514.service: Deactivated successfully.
Jan 14 01:23:00.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.134:22-10.0.0.1:51514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:23:00.648150 systemd[1]: session-22.scope: Deactivated successfully.
Jan 14 01:23:00.654447 systemd-logind[1587]: Session 22 logged out. Waiting for processes to exit.
Jan 14 01:23:00.664465 systemd[1]: Started sshd@21-10.0.0.134:22-10.0.0.1:51526.service - OpenSSH per-connection server daemon (10.0.0.1:51526).
Jan 14 01:23:00.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.134:22-10.0.0.1:51526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:23:00.669395 systemd-logind[1587]: Removed session 22.
Jan 14 01:23:00.781000 audit[5296]: USER_ACCT pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:00.782366 sshd[5296]: Accepted publickey for core from 10.0.0.1 port 51526 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0
Jan 14 01:23:00.784000 audit[5296]: CRED_ACQ pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:00.784000 audit[5296]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb5b0c890 a2=3 a3=0 items=0 ppid=1 pid=5296 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:23:00.784000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:23:00.786936 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:23:00.804489 systemd-logind[1587]: New session 23 of user core.
Jan 14 01:23:00.816959 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 14 01:23:00.821000 audit[5296]: USER_START pid=5296 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:00.827000 audit[5300]: CRED_ACQ pid=5300 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:00.994237 sshd[5300]: Connection closed by 10.0.0.1 port 51526
Jan 14 01:23:00.996855 sshd-session[5296]: pam_unix(sshd:session): session closed for user core
Jan 14 01:23:01.000000 audit[5296]: USER_END pid=5296 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:01.002000 audit[5296]: CRED_DISP pid=5296 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:01.009921 systemd[1]: sshd@21-10.0.0.134:22-10.0.0.1:51526.service: Deactivated successfully.
Jan 14 01:23:01.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.134:22-10.0.0.1:51526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:23:01.019249 systemd[1]: session-23.scope: Deactivated successfully.
Jan 14 01:23:01.029253 systemd-logind[1587]: Session 23 logged out. Waiting for processes to exit.
Jan 14 01:23:01.035483 systemd-logind[1587]: Removed session 23.
Jan 14 01:23:03.399032 kubelet[2787]: E0114 01:23:03.398959 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-kj4q4" podUID="2e6b76b0-bbf3-4bda-8c0a-ac8224558858"
Jan 14 01:23:04.397606 kubelet[2787]: E0114 01:23:04.397360 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qqxnp" podUID="ba3d93c2-390e-4ba5-bb19-4864194c73f7"
Jan 14 01:23:04.398489 kubelet[2787]: E0114 01:23:04.398447 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sz9" podUID="09905137-6883-4a25-b76e-d0608b4b6347"
Jan 14 01:23:05.394146 kubelet[2787]: E0114 01:23:05.393974 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:23:06.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.134:22-10.0.0.1:55174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:23:06.026893 systemd[1]: Started sshd@22-10.0.0.134:22-10.0.0.1:55174.service - OpenSSH per-connection server daemon (10.0.0.1:55174).
Jan 14 01:23:06.035891 kernel: kauditd_printk_skb: 57 callbacks suppressed
Jan 14 01:23:06.036007 kernel: audit: type=1130 audit(1768353786.029:880): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.134:22-10.0.0.1:55174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:23:06.154491 sshd[5313]: Accepted publickey for core from 10.0.0.1 port 55174 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0
Jan 14 01:23:06.153000 audit[5313]: USER_ACCT pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:06.168678 systemd-logind[1587]: New session 24 of user core.
Jan 14 01:23:06.155000 audit[5313]: CRED_ACQ pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:06.158759 sshd-session[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:23:06.186319 kernel: audit: type=1101 audit(1768353786.153:881): pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:06.186434 kernel: audit: type=1103 audit(1768353786.155:882): pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:06.187711 kernel: audit: type=1006 audit(1768353786.155:883): pid=5313 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Jan 14 01:23:06.155000 audit[5313]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe65bfbbc0 a2=3 a3=0 items=0 ppid=1 pid=5313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:23:06.202001 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 14 01:23:06.220721 kernel: audit: type=1300 audit(1768353786.155:883): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe65bfbbc0 a2=3 a3=0 items=0 ppid=1 pid=5313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:23:06.220908 kernel: audit: type=1327 audit(1768353786.155:883): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:23:06.155000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:23:06.205000 audit[5313]: USER_START pid=5313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:06.243740 kernel: audit: type=1105 audit(1768353786.205:884): pid=5313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:06.243873 kernel: audit: type=1103 audit(1768353786.210:885): pid=5317 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:06.210000 audit[5317]: CRED_ACQ pid=5317 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:06.353811 sshd[5317]: Connection closed by 10.0.0.1 port 55174
Jan 14 01:23:06.354294 sshd-session[5313]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:06.357000 audit[5313]: USER_END pid=5313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:06.360000 audit[5313]: CRED_DISP pid=5313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:06.384491 systemd[1]: sshd@22-10.0.0.134:22-10.0.0.1:55174.service: Deactivated successfully. Jan 14 01:23:06.388740 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 01:23:06.393975 systemd-logind[1587]: Session 24 logged out. Waiting for processes to exit. Jan 14 01:23:06.398384 kernel: audit: type=1106 audit(1768353786.357:886): pid=5313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:06.398448 kernel: audit: type=1104 audit(1768353786.360:887): pid=5313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:06.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.134:22-10.0.0.1:55174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:23:06.401335 systemd-logind[1587]: Removed session 24. Jan 14 01:23:09.400439 kubelet[2787]: E0114 01:23:09.400206 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-jz6rb" podUID="1bd888e4-98c6-46dd-883e-12946740dfe2" Jan 14 01:23:09.401354 kubelet[2787]: E0114 01:23:09.401259 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c69b4ddbc-mp7cc" podUID="0c6080b7-a312-4044-afca-8c80fd4d65bc" Jan 14 01:23:10.999856 kernel: hrtimer: interrupt took 8127097 ns Jan 14 01:23:11.392419 systemd[1]: Started sshd@23-10.0.0.134:22-10.0.0.1:55176.service - OpenSSH per-connection server daemon (10.0.0.1:55176). 
Jan 14 01:23:11.400249 kubelet[2787]: E0114 01:23:11.398906 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59555f9565-zxzlc" podUID="7724ac30-d973-433e-90c7-10adfa17a249" Jan 14 01:23:11.406288 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:23:11.406361 kernel: audit: type=1130 audit(1768353791.399:889): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.134:22-10.0.0.1:55176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:11.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.134:22-10.0.0.1:55176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:23:11.568000 audit[5331]: USER_ACCT pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:11.569711 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 55176 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:23:11.589722 kernel: audit: type=1101 audit(1768353791.568:890): pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:11.589913 kernel: audit: type=1103 audit(1768353791.588:891): pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:11.588000 audit[5331]: CRED_ACQ pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:11.591485 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:23:11.616983 systemd-logind[1587]: New session 25 of user core. 
Jan 14 01:23:11.617407 kernel: audit: type=1006 audit(1768353791.589:892): pid=5331 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 14 01:23:11.589000 audit[5331]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc27ed7f0 a2=3 a3=0 items=0 ppid=1 pid=5331 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:11.589000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:11.647273 kernel: audit: type=1300 audit(1768353791.589:892): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc27ed7f0 a2=3 a3=0 items=0 ppid=1 pid=5331 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:11.647413 kernel: audit: type=1327 audit(1768353791.589:892): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:11.648307 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 14 01:23:11.657000 audit[5331]: USER_START pid=5331 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:11.663000 audit[5335]: CRED_ACQ pid=5335 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:11.700092 kernel: audit: type=1105 audit(1768353791.657:893): pid=5331 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:11.700266 kernel: audit: type=1103 audit(1768353791.663:894): pid=5335 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:11.840000 audit[5345]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:11.840000 audit[5345]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe516855a0 a2=0 a3=7ffe5168558c items=0 ppid=2947 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:11.885819 kernel: audit: type=1325 audit(1768353791.840:895): table=filter:148 family=2 entries=26 op=nft_register_rule 
pid=5345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:11.886017 kernel: audit: type=1300 audit(1768353791.840:895): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe516855a0 a2=0 a3=7ffe5168558c items=0 ppid=2947 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:11.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:11.898000 audit[5345]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:11.898000 audit[5345]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe516855a0 a2=0 a3=7ffe5168558c items=0 ppid=2947 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:11.898000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:11.916263 sshd[5335]: Connection closed by 10.0.0.1 port 55176 Jan 14 01:23:11.917786 sshd-session[5331]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:11.929000 audit[5331]: USER_END pid=5331 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:11.929000 audit[5331]: CRED_DISP pid=5331 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:11.945334 systemd[1]: sshd@23-10.0.0.134:22-10.0.0.1:55176.service: Deactivated successfully. Jan 14 01:23:11.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.134:22-10.0.0.1:55176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:11.959831 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 01:23:11.977243 systemd-logind[1587]: Session 25 logged out. Waiting for processes to exit. Jan 14 01:23:11.981739 systemd-logind[1587]: Removed session 25. Jan 14 01:23:14.400877 kubelet[2787]: E0114 01:23:14.400297 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-kj4q4" podUID="2e6b76b0-bbf3-4bda-8c0a-ac8224558858" Jan 14 01:23:16.976659 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 01:23:16.977328 kernel: audit: type=1130 audit(1768353796.963:900): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.134:22-10.0.0.1:34456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:16.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.134:22-10.0.0.1:34456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:23:16.964882 systemd[1]: Started sshd@24-10.0.0.134:22-10.0.0.1:34456.service - OpenSSH per-connection server daemon (10.0.0.1:34456). Jan 14 01:23:17.206615 sshd[5352]: Accepted publickey for core from 10.0.0.1 port 34456 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:23:17.204000 audit[5352]: USER_ACCT pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:17.213010 sshd-session[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:23:17.242463 kernel: audit: type=1101 audit(1768353797.204:901): pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:17.205000 audit[5352]: CRED_ACQ pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:17.290488 systemd-logind[1587]: New session 26 of user core. 
Jan 14 01:23:17.295217 kernel: audit: type=1103 audit(1768353797.205:902): pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:17.295291 kernel: audit: type=1006 audit(1768353797.209:903): pid=5352 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 14 01:23:17.295345 kernel: audit: type=1300 audit(1768353797.209:903): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8983d980 a2=3 a3=0 items=0 ppid=1 pid=5352 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:17.209000 audit[5352]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8983d980 a2=3 a3=0 items=0 ppid=1 pid=5352 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:17.209000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:17.331260 kernel: audit: type=1327 audit(1768353797.209:903): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:17.330915 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 14 01:23:17.360000 audit[5352]: USER_START pid=5352 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:17.371000 audit[5356]: CRED_ACQ pid=5356 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:17.413634 kernel: audit: type=1105 audit(1768353797.360:904): pid=5352 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:17.413734 kernel: audit: type=1103 audit(1768353797.371:905): pid=5356 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:17.639340 sshd[5356]: Connection closed by 10.0.0.1 port 34456 Jan 14 01:23:17.641751 sshd-session[5352]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:17.652000 audit[5352]: USER_END pid=5352 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:17.677294 systemd[1]: sshd@24-10.0.0.134:22-10.0.0.1:34456.service: Deactivated successfully. 
Jan 14 01:23:17.653000 audit[5352]: CRED_DISP pid=5352 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:17.695870 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 01:23:17.702805 systemd-logind[1587]: Session 26 logged out. Waiting for processes to exit. Jan 14 01:23:17.705581 systemd-logind[1587]: Removed session 26. Jan 14 01:23:17.714043 kernel: audit: type=1106 audit(1768353797.652:906): pid=5352 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:17.714222 kernel: audit: type=1104 audit(1768353797.653:907): pid=5352 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:17.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.134:22-10.0.0.1:34456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:23:19.396784 containerd[1611]: time="2026-01-14T01:23:19.396318467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:23:19.400849 kubelet[2787]: E0114 01:23:19.399674 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qqxnp" podUID="ba3d93c2-390e-4ba5-bb19-4864194c73f7" Jan 14 01:23:19.489754 containerd[1611]: time="2026-01-14T01:23:19.489490334Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:23:19.498590 containerd[1611]: time="2026-01-14T01:23:19.496039230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:23:19.498590 containerd[1611]: time="2026-01-14T01:23:19.496264757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:23:19.498770 kubelet[2787]: E0114 01:23:19.498005 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:23:19.498770 kubelet[2787]: E0114 01:23:19.498081 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:23:19.498770 kubelet[2787]: E0114 01:23:19.498362 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xm9wn,ReadOnly:true,MountPath:/var/run/secrets/kubernete
s.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x2sz9_calico-system(09905137-6883-4a25-b76e-d0608b4b6347): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:23:19.505779 kubelet[2787]: E0114 01:23:19.505661 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-x2sz9" podUID="09905137-6883-4a25-b76e-d0608b4b6347" Jan 14 01:23:20.408399 kubelet[2787]: E0114 01:23:20.408052 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c69b4ddbc-mp7cc" podUID="0c6080b7-a312-4044-afca-8c80fd4d65bc" Jan 14 01:23:21.394398 kubelet[2787]: E0114 01:23:21.394236 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:23:22.396588 kubelet[2787]: E0114 01:23:22.396254 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c99cb9d5d-jz6rb" podUID="1bd888e4-98c6-46dd-883e-12946740dfe2" Jan 14 01:23:22.656864 systemd[1]: Started sshd@25-10.0.0.134:22-10.0.0.1:32774.service - OpenSSH per-connection server daemon (10.0.0.1:32774). 
Jan 14 01:23:22.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.134:22-10.0.0.1:32774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:22.661136 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:23:22.661276 kernel: audit: type=1130 audit(1768353802.655:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.134:22-10.0.0.1:32774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:22.832000 audit[5376]: USER_ACCT pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:22.834761 sshd[5376]: Accepted publickey for core from 10.0.0.1 port 32774 ssh2: RSA SHA256:3qGrMVfuhKNIe5rlCK8c/D9IY3u9YaQGWBapsCdNUS0 Jan 14 01:23:22.838351 sshd-session[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:23:22.834000 audit[5376]: CRED_ACQ pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:22.849802 systemd-logind[1587]: New session 27 of user core. 
Jan 14 01:23:22.858990 kernel: audit: type=1101 audit(1768353802.832:910): pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:22.859193 kernel: audit: type=1103 audit(1768353802.834:911): pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:22.859263 kernel: audit: type=1006 audit(1768353802.835:912): pid=5376 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 14 01:23:22.835000 audit[5376]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffca15b220 a2=3 a3=0 items=0 ppid=1 pid=5376 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:22.880586 kernel: audit: type=1300 audit(1768353802.835:912): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffca15b220 a2=3 a3=0 items=0 ppid=1 pid=5376 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:22.880718 kernel: audit: type=1327 audit(1768353802.835:912): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:22.835000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:22.889898 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 14 01:23:22.893000 audit[5376]: USER_START pid=5376 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:22.911711 kernel: audit: type=1105 audit(1768353802.893:913): pid=5376 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:22.897000 audit[5380]: CRED_ACQ pid=5380 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:22.925637 kernel: audit: type=1103 audit(1768353802.897:914): pid=5380 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:23.075434 sshd[5380]: Connection closed by 10.0.0.1 port 32774 Jan 14 01:23:23.076753 sshd-session[5376]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:23.077000 audit[5376]: USER_END pid=5376 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:23:23.085989 systemd-logind[1587]: Session 27 logged out. Waiting for processes to exit. 
Jan 14 01:23:23.086346 systemd[1]: sshd@25-10.0.0.134:22-10.0.0.1:32774.service: Deactivated successfully.
Jan 14 01:23:23.090387 systemd[1]: session-27.scope: Deactivated successfully.
Jan 14 01:23:23.094322 systemd-logind[1587]: Removed session 27.
Jan 14 01:23:23.078000 audit[5376]: CRED_DISP pid=5376 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:23.113349 kernel: audit: type=1106 audit(1768353803.077:915): pid=5376 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:23.113649 kernel: audit: type=1104 audit(1768353803.078:916): pid=5376 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:23:23.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.134:22-10.0.0.1:32774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:23:23.395111 containerd[1611]: time="2026-01-14T01:23:23.394614066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 14 01:23:23.462061 containerd[1611]: time="2026-01-14T01:23:23.461955533Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:23:23.464269 containerd[1611]: time="2026-01-14T01:23:23.464146394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:23:23.464348 containerd[1611]: time="2026-01-14T01:23:23.464267384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 14 01:23:23.465884 kubelet[2787]: E0114 01:23:23.465793 2787 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 14 01:23:23.466497 kubelet[2787]: E0114 01:23:23.465887 2787 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 14 01:23:23.466497 kubelet[2787]: E0114 01:23:23.466188 2787 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgsfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59555f9565-zxzlc_calico-system(7724ac30-d973-433e-90c7-10adfa17a249): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:23:23.467655 kubelet[2787]: E0114 01:23:23.467610 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59555f9565-zxzlc" podUID="7724ac30-d973-433e-90c7-10adfa17a249"