Jan 24 12:14:04.443570 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Sat Jan 24 09:07:34 -00 2026
Jan 24 12:14:04.443592 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7953d3c7acaad6ee79638a10c67ea9f0b3a8597919989b6fbf2f9a1742d4ba63
Jan 24 12:14:04.443611 kernel: BIOS-provided physical RAM map:
Jan 24 12:14:04.443622 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 24 12:14:04.443630 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 24 12:14:04.443638 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 24 12:14:04.443647 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 24 12:14:04.443656 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 24 12:14:04.443664 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 24 12:14:04.443673 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 24 12:14:04.443685 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 12:14:04.443693 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 24 12:14:04.443701 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 24 12:14:04.443710 kernel: NX (Execute Disable) protection: active
Jan 24 12:14:04.443720 kernel: APIC: Static calls initialized
Jan 24 12:14:04.443732 kernel: SMBIOS 2.8 present.
Jan 24 12:14:04.443745 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 24 12:14:04.443754 kernel: DMI: Memory slots populated: 1/1 Jan 24 12:14:04.443762 kernel: Hypervisor detected: KVM Jan 24 12:14:04.443771 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 24 12:14:04.443780 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 12:14:04.443789 kernel: kvm-clock: using sched offset of 11212713426 cycles Jan 24 12:14:04.443799 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 12:14:04.443990 kernel: tsc: Detected 2445.426 MHz processor Jan 24 12:14:04.444013 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 12:14:04.444024 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 12:14:04.444033 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 24 12:14:04.444043 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 24 12:14:04.444052 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 12:14:04.444062 kernel: Using GB pages for direct mapping Jan 24 12:14:04.444071 kernel: ACPI: Early table checksum verification disabled Jan 24 12:14:04.444083 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 24 12:14:04.444093 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 12:14:04.444102 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 12:14:04.444112 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 12:14:04.444122 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 24 12:14:04.444134 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 12:14:04.444144 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 12:14:04.444157 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 12:14:04.444166 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 12:14:04.444180 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 24 12:14:04.444190 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 24 12:14:04.444199 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 24 12:14:04.444209 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 24 12:14:04.444222 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 24 12:14:04.444232 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 24 12:14:04.444244 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 24 12:14:04.444256 kernel: No NUMA configuration found Jan 24 12:14:04.444266 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 24 12:14:04.444275 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 24 12:14:04.444288 kernel: Zone ranges: Jan 24 12:14:04.444298 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 12:14:04.444308 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 24 12:14:04.444317 kernel: Normal empty Jan 24 12:14:04.444327 kernel: Device empty Jan 24 12:14:04.444337 kernel: Movable zone start for each node Jan 24 12:14:04.444346 kernel: Early memory node ranges Jan 24 12:14:04.444356 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Jan 24 12:14:04.444369 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 24 12:14:04.444380 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 24 12:14:04.444391 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 12:14:04.444401 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 24 12:14:04.444412 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 24 12:14:04.444422 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 12:14:04.444433 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 12:14:04.444447 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 12:14:04.444458 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 12:14:04.444468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 12:14:04.444477 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 12:14:04.444487 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 12:14:04.444497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 12:14:04.444506 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 12:14:04.444516 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 12:14:04.444529 kernel: TSC deadline timer available Jan 24 12:14:04.444538 kernel: CPU topo: Max. logical packages: 1 Jan 24 12:14:04.444548 kernel: CPU topo: Max. logical dies: 1 Jan 24 12:14:04.444558 kernel: CPU topo: Max. dies per package: 1 Jan 24 12:14:04.444568 kernel: CPU topo: Max. threads per core: 1 Jan 24 12:14:04.444579 kernel: CPU topo: Num. cores per package: 4 Jan 24 12:14:04.444589 kernel: CPU topo: Num. threads per package: 4 Jan 24 12:14:04.444603 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 24 12:14:04.444613 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 12:14:04.444623 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 24 12:14:04.444634 kernel: kvm-guest: setup PV sched yield Jan 24 12:14:04.444644 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 24 12:14:04.444655 kernel: Booting paravirtualized kernel on KVM Jan 24 12:14:04.444665 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 12:14:04.444676 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 24 12:14:04.444689 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 24 12:14:04.444700 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 24 12:14:04.444710 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 24 12:14:04.444720 kernel: kvm-guest: PV spinlocks enabled Jan 24 12:14:04.444733 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 12:14:04.444744 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7953d3c7acaad6ee79638a10c67ea9f0b3a8597919989b6fbf2f9a1742d4ba63 Jan 24 12:14:04.444757 kernel: random: crng init done Jan 24 12:14:04.444766 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 12:14:04.444776 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 
12:14:04.444785 kernel: Fallback order for Node 0: 0 Jan 24 12:14:04.444795 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Jan 24 12:14:04.444804 kernel: Policy zone: DMA32 Jan 24 12:14:04.444978 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 12:14:04.444992 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 24 12:14:04.445002 kernel: ftrace: allocating 40128 entries in 157 pages Jan 24 12:14:04.445011 kernel: ftrace: allocated 157 pages with 5 groups Jan 24 12:14:04.445021 kernel: Dynamic Preempt: voluntary Jan 24 12:14:04.445031 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 12:14:04.445044 kernel: rcu: RCU event tracing is enabled. Jan 24 12:14:04.445055 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 24 12:14:04.445065 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 12:14:04.445079 kernel: Rude variant of Tasks RCU enabled. Jan 24 12:14:04.445089 kernel: Tracing variant of Tasks RCU enabled. Jan 24 12:14:04.445098 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 12:14:04.445108 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 24 12:14:04.445118 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 12:14:04.445127 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 12:14:04.445137 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 12:14:04.445150 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 24 12:14:04.445161 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 12:14:04.445180 kernel: Console: colour VGA+ 80x25 Jan 24 12:14:04.445194 kernel: printk: legacy console [ttyS0] enabled Jan 24 12:14:04.445205 kernel: ACPI: Core revision 20240827 Jan 24 12:14:04.445216 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 24 12:14:04.445227 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 12:14:04.445238 kernel: x2apic enabled Jan 24 12:14:04.445249 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 12:14:04.445263 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 24 12:14:04.445274 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 24 12:14:04.445285 kernel: kvm-guest: setup PV IPIs Jan 24 12:14:04.445296 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 24 12:14:04.445310 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 24 12:14:04.445321 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 24 12:14:04.445333 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 24 12:14:04.445344 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 24 12:14:04.445355 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 24 12:14:04.445366 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 12:14:04.445377 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 12:14:04.445390 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 12:14:04.445401 kernel: Speculative Store Bypass: Vulnerable Jan 24 12:14:04.445412 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 24 12:14:04.445424 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 24 12:14:04.445435 kernel: active return thunk: srso_alias_return_thunk Jan 24 12:14:04.445446 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 24 12:14:04.445457 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 24 12:14:04.445471 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 12:14:04.445482 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 12:14:04.445493 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 12:14:04.445504 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 12:14:04.445515 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 12:14:04.445527 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 24 12:14:04.445537 kernel: Freeing SMP alternatives memory: 32K Jan 24 12:14:04.445551 kernel: pid_max: default: 32768 minimum: 301 Jan 24 12:14:04.445562 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 24 12:14:04.445573 kernel: landlock: Up and running. Jan 24 12:14:04.445584 kernel: SELinux: Initializing. Jan 24 12:14:04.445595 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 12:14:04.445606 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 12:14:04.445617 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 24 12:14:04.445631 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 24 12:14:04.445642 kernel: signal: max sigframe size: 1776 Jan 24 12:14:04.445653 kernel: rcu: Hierarchical SRCU implementation. Jan 24 12:14:04.445664 kernel: rcu: Max phase no-delay instances is 400. Jan 24 12:14:04.445675 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 24 12:14:04.445686 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 12:14:04.445697 kernel: smp: Bringing up secondary CPUs ... Jan 24 12:14:04.445711 kernel: smpboot: x86: Booting SMP configuration: Jan 24 12:14:04.445722 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 24 12:14:04.445733 kernel: smp: Brought up 1 node, 4 CPUs Jan 24 12:14:04.445744 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 24 12:14:04.445757 kernel: Memory: 2445284K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120528K reserved, 0K cma-reserved) Jan 24 12:14:04.445770 kernel: devtmpfs: initialized Jan 24 12:14:04.445777 kernel: x86/mm: Memory block size: 128MB Jan 24 12:14:04.445788 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 12:14:04.445795 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 24 12:14:04.445802 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 12:14:04.445953 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 12:14:04.445962 kernel: audit: initializing netlink subsys (disabled) Jan 24 12:14:04.445970 kernel: audit: type=2000 audit(1769256836.478:1): state=initialized audit_enabled=0 res=1 Jan 24 12:14:04.445977 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 12:14:04.445988 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 12:14:04.445995 kernel: cpuidle: using governor menu Jan 24 12:14:04.446002 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 12:14:04.446009 kernel: dca service started, version 1.12.1 Jan 24 12:14:04.446017 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 24 12:14:04.446024 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 24 12:14:04.446031 kernel: PCI: Using configuration type 1 for base access Jan 24 12:14:04.446041 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 24 12:14:04.446048 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 12:14:04.446055 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 12:14:04.446062 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 12:14:04.446070 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 12:14:04.446077 kernel: ACPI: Added _OSI(Module Device) Jan 24 12:14:04.446084 kernel: ACPI: Added _OSI(Processor Device) Jan 24 12:14:04.446094 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 12:14:04.446101 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 12:14:04.446108 kernel: ACPI: Interpreter enabled Jan 24 12:14:04.446115 kernel: ACPI: PM: (supports S0 S3 S5) Jan 24 12:14:04.446123 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 12:14:04.446130 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 12:14:04.446137 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 12:14:04.446144 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 12:14:04.446154 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 12:14:04.446454 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 12:14:04.446691 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 24 12:14:04.447093 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 24 12:14:04.447110 kernel: PCI host bridge to bus 0000:00 Jan 24 12:14:04.447334 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 12:14:04.447536 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 12:14:04.447734 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 12:14:04.448127 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 24 12:14:04.448332 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 24 12:14:04.448492 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 24 12:14:04.448654 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 12:14:04.449087 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 24 12:14:04.449333 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 24 12:14:04.449562 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 24 12:14:04.449775 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 24 12:14:04.450390 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 24 12:14:04.450607 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 12:14:04.450987 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 10742 usecs Jan 24 12:14:04.451175 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 24 12:14:04.451343 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 24 12:14:04.451550 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 24 12:14:04.451773 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 24 12:14:04.452195 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 24 12:14:04.452428 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 24 12:14:04.452638 kernel: pci 0000:00:03.0: BAR 1 [mem 
0xfebd2000-0xfebd2fff] Jan 24 12:14:04.453002 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Jan 24 12:14:04.453246 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 24 12:14:04.453463 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jan 24 12:14:04.453680 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jan 24 12:14:04.454175 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 24 12:14:04.454351 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 24 12:14:04.454524 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 24 12:14:04.454725 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 24 12:14:04.455297 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 24 12:14:04.455522 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jan 24 12:14:04.455768 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jan 24 12:14:04.456197 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 24 12:14:04.456416 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 24 12:14:04.456434 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 12:14:04.456442 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 12:14:04.456449 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 12:14:04.456457 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 12:14:04.456465 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 24 12:14:04.456472 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 24 12:14:04.456482 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 24 12:14:04.456489 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 24 12:14:04.456497 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 24 12:14:04.456504 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 24 12:14:04.456511 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 24 12:14:04.456519 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 24 12:14:04.456526 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 24 12:14:04.456538 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 24 12:14:04.456555 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 24 12:14:04.456567 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 24 12:14:04.456577 kernel: iommu: Default domain type: Translated Jan 24 12:14:04.456587 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 12:14:04.456598 kernel: PCI: Using ACPI for IRQ routing Jan 24 12:14:04.456608 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 12:14:04.456618 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 24 12:14:04.456632 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 24 12:14:04.457032 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 24 12:14:04.457258 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 24 12:14:04.457430 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 12:14:04.457440 kernel: vgaarb: loaded Jan 24 12:14:04.457448 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 24 12:14:04.457455 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 24 12:14:04.457467 kernel: clocksource: 
Switched to clocksource kvm-clock Jan 24 12:14:04.457475 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 12:14:04.457482 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 12:14:04.457489 kernel: pnp: PnP ACPI init Jan 24 12:14:04.457670 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 24 12:14:04.457681 kernel: pnp: PnP ACPI: found 6 devices Jan 24 12:14:04.457692 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 12:14:04.457699 kernel: NET: Registered PF_INET protocol family Jan 24 12:14:04.457707 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 12:14:04.457714 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 24 12:14:04.457722 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 12:14:04.457729 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 12:14:04.457737 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 24 12:14:04.457746 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 24 12:14:04.457754 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 12:14:04.457761 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 12:14:04.457768 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 12:14:04.457776 kernel: NET: Registered PF_XDP protocol family Jan 24 12:14:04.458191 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 12:14:04.458372 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 12:14:04.458536 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 12:14:04.458739 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 24 12:14:04.459157 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 24 12:14:04.459359 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 24 12:14:04.459372 kernel: PCI: CLS 0 bytes, default 64 Jan 24 12:14:04.459379 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 24 12:14:04.459387 kernel: Initialise system trusted keyrings Jan 24 12:14:04.459399 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 24 12:14:04.459407 kernel: Key type asymmetric registered Jan 24 12:14:04.459414 kernel: Asymmetric key parser 'x509' registered Jan 24 12:14:04.459422 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 24 12:14:04.459429 kernel: io scheduler mq-deadline registered Jan 24 12:14:04.459436 kernel: io scheduler kyber registered Jan 24 12:14:04.459444 kernel: io scheduler bfq registered Jan 24 12:14:04.459453 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 12:14:04.459461 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 12:14:04.459469 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 12:14:04.459476 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 24 12:14:04.459483 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 12:14:04.459491 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 12:14:04.459498 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 12:14:04.459508 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 12:14:04.459515 kernel: serio: i8042 AUX port at 
0x60,0x64 irq 12 Jan 24 12:14:04.459522 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 12:14:04.459695 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 24 12:14:04.460038 kernel: rtc_cmos 00:04: registered as rtc0 Jan 24 12:14:04.460310 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T12:14:01 UTC (1769256841) Jan 24 12:14:04.460523 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 24 12:14:04.460543 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 24 12:14:04.460554 kernel: NET: Registered PF_INET6 protocol family Jan 24 12:14:04.460564 kernel: Segment Routing with IPv6 Jan 24 12:14:04.460574 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 12:14:04.460584 kernel: NET: Registered PF_PACKET protocol family Jan 24 12:14:04.460594 kernel: Key type dns_resolver registered Jan 24 12:14:04.460609 kernel: IPI shorthand broadcast: enabled Jan 24 12:14:04.460620 kernel: sched_clock: Marking stable (3767059642, 1990841867)->(6732688349, -974786840) Jan 24 12:14:04.460630 kernel: registered taskstats version 1 Jan 24 12:14:04.460642 kernel: Loading compiled-in X.509 certificates Jan 24 12:14:04.460654 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: a97c6138cc1b5c46f82656a7e055bcfc44b38b5c' Jan 24 12:14:04.460667 kernel: Demotion targets for Node 0: null Jan 24 12:14:04.460677 kernel: Key type .fscrypt registered Jan 24 12:14:04.460684 kernel: Key type fscrypt-provisioning registered Jan 24 12:14:04.460695 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 12:14:04.460702 kernel: ima: Allocated hash algorithm: sha1 Jan 24 12:14:04.460709 kernel: ima: No architecture policies found Jan 24 12:14:04.460717 kernel: clk: Disabling unused clocks Jan 24 12:14:04.460724 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 24 12:14:04.460731 kernel: Write protecting the kernel read-only data: 47104k Jan 24 12:14:04.460741 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 24 12:14:04.460748 kernel: Run /init as init process Jan 24 12:14:04.460756 kernel: with arguments: Jan 24 12:14:04.460763 kernel: /init Jan 24 12:14:04.460770 kernel: with environment: Jan 24 12:14:04.460777 kernel: HOME=/ Jan 24 12:14:04.460785 kernel: TERM=linux Jan 24 12:14:04.460792 kernel: SCSI subsystem initialized Jan 24 12:14:04.460801 kernel: libata version 3.00 loaded. 
Jan 24 12:14:04.461176 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 12:14:04.461194 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 12:14:04.461410 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 24 12:14:04.461637 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 24 12:14:04.462146 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 12:14:04.462402 kernel: scsi host0: ahci Jan 24 12:14:04.462642 kernel: scsi host1: ahci Jan 24 12:14:04.463032 kernel: scsi host2: ahci Jan 24 12:14:04.463251 kernel: scsi host3: ahci Jan 24 12:14:04.463495 kernel: scsi host4: ahci Jan 24 12:14:04.463731 kernel: scsi host5: ahci Jan 24 12:14:04.463753 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Jan 24 12:14:04.463765 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Jan 24 12:14:04.463776 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Jan 24 12:14:04.463787 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Jan 24 12:14:04.463798 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Jan 24 12:14:04.464085 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Jan 24 12:14:04.464107 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 12:14:04.464118 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 12:14:04.464129 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 12:14:04.464140 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 24 12:14:04.464150 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 12:14:04.464161 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 12:14:04.464171 kernel: ata3.00: LPM support broken, forcing max_power Jan 24 12:14:04.464185 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 24 12:14:04.464197 kernel: ata3.00: applying bridge limits Jan 24 12:14:04.464210 kernel: ata3.00: LPM support broken, forcing max_power Jan 24 12:14:04.464220 kernel: ata3.00: configured for UDMA/100 Jan 24 12:14:04.464488 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 24 12:14:04.464732 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 24 12:14:04.465248 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 24 12:14:04.465269 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 12:14:04.465281 kernel: GPT:16515071 != 27000831 Jan 24 12:14:04.465291 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 12:14:04.465302 kernel: GPT:16515071 != 27000831 Jan 24 12:14:04.465312 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 12:14:04.465322 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 12:14:04.465567 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 24 12:14:04.465583 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 12:14:04.465975 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 24 12:14:04.465992 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 24 12:14:04.466000 kernel: device-mapper: uevent: version 1.0.3 Jan 24 12:14:04.466008 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 24 12:14:04.466020 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 24 12:14:04.466028 kernel: raid6: avx2x4 gen() 27372 MB/s Jan 24 12:14:04.466035 kernel: raid6: avx2x2 gen() 25343 MB/s Jan 24 12:14:04.466045 kernel: raid6: avx2x1 gen() 17351 MB/s Jan 24 12:14:04.466053 kernel: raid6: using algorithm avx2x4 gen() 27372 MB/s Jan 24 12:14:04.466060 kernel: raid6: .... xor() 5452 MB/s, rmw enabled Jan 24 12:14:04.466068 kernel: raid6: using avx2x2 recovery algorithm Jan 24 12:14:04.466076 kernel: xor: automatically using best checksumming function avx Jan 24 12:14:04.466086 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 12:14:04.466096 kernel: BTRFS: device fsid d3bd77fc-0f38-45e2-bb37-1f1b4d0917b8 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (181) Jan 24 12:14:04.466104 kernel: BTRFS info (device dm-0): first mount of filesystem d3bd77fc-0f38-45e2-bb37-1f1b4d0917b8 Jan 24 12:14:04.466111 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 12:14:04.466121 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 12:14:04.466129 kernel: BTRFS info (device dm-0): enabling free space tree Jan 24 12:14:04.466137 kernel: loop: module loaded Jan 24 12:14:04.466145 kernel: loop0: detected capacity change from 0 to 100552 Jan 24 12:14:04.466152 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 12:14:04.466161 systemd[1]: Successfully made /usr/ read-only. Jan 24 12:14:04.466171 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 24 12:14:04.466182 systemd[1]: Detected virtualization kvm. Jan 24 12:14:04.466190 systemd[1]: Detected architecture x86-64. Jan 24 12:14:04.466198 systemd[1]: Running in initrd. Jan 24 12:14:04.466205 systemd[1]: No hostname configured, using default hostname. Jan 24 12:14:04.466214 systemd[1]: Hostname set to . Jan 24 12:14:04.466223 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 24 12:14:04.466231 systemd[1]: Queued start job for default target initrd.target. Jan 24 12:14:04.466239 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 24 12:14:04.466247 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 12:14:04.466256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 12:14:04.466271 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 12:14:04.466286 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 12:14:04.466302 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 12:14:04.466314 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 12:14:04.466325 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 24 12:14:04.466337 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 12:14:04.466349 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 24 12:14:04.466360 systemd[1]: Reached target paths.target - Path Units. Jan 24 12:14:04.466375 systemd[1]: Reached target slices.target - Slice Units. Jan 24 12:14:04.466386 systemd[1]: Reached target swap.target - Swaps. Jan 24 12:14:04.466400 systemd[1]: Reached target timers.target - Timer Units. Jan 24 12:14:04.466415 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 12:14:04.466427 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 12:14:04.466439 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 24 12:14:04.466450 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 12:14:04.466466 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 24 12:14:04.466477 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 12:14:04.466488 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 12:14:04.466499 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 12:14:04.466510 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 12:14:04.466522 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 12:14:04.466540 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 12:14:04.466551 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 12:14:04.466563 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 12:14:04.466574 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 24 12:14:04.466586 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 12:14:04.466597 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 12:14:04.466609 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 12:14:04.466624 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 12:14:04.466670 systemd-journald[319]: Collecting audit messages is enabled. Jan 24 12:14:04.466701 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 12:14:04.466714 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 12:14:04.466726 kernel: audit: type=1130 audit(1769256844.457:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:04.466737 systemd-journald[319]: Journal started Jan 24 12:14:04.466763 systemd-journald[319]: Runtime Journal (/run/log/journal/8228d0f75b1e4f8996191d1a2dfbc081) is 6M, max 48.2M, 42.1M free. Jan 24 12:14:04.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:04.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 24 12:14:04.512520 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 12:14:04.524381 kernel: audit: type=1130 audit(1769256844.494:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:04.524411 kernel: audit: type=1130 audit(1769256844.511:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:04.524428 kernel: audit: type=1130 audit(1769256844.518:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:04.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:04.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:04.513496 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 12:14:04.960376 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 12:14:04.960410 kernel: Bridge firewalling registered Jan 24 12:14:04.522700 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 12:14:04.540024 systemd-modules-load[322]: Inserted module 'br_netfilter' Jan 24 12:14:04.935462 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 12:14:04.983544 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 12:14:05.029072 kernel: audit: type=1130 audit(1769256844.991:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:04.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:05.030354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 12:14:05.062018 kernel: audit: type=1130 audit(1769256845.038:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:05.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:05.044315 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 12:14:05.072780 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 12:14:05.082509 systemd-tmpfiles[331]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 24 12:14:05.115659 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 24 12:14:05.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:05.144124 kernel: audit: type=1130 audit(1769256845.122:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:05.153751 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 12:14:05.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:05.179019 kernel: audit: type=1130 audit(1769256845.154:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:05.180204 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 12:14:05.229449 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 12:14:05.262223 kernel: audit: type=1130 audit(1769256845.238:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:05.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:05.240168 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 12:14:05.300470 kernel: audit: type=1130 audit(1769256845.267:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:05.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:05.300694 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 12:14:05.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:05.318436 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 12:14:05.324000 audit: BPF prog-id=6 op=LOAD Jan 24 12:14:05.326569 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 24 12:14:05.367614 dracut-cmdline[355]: dracut-109 Jan 24 12:14:05.379470 dracut-cmdline[355]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7953d3c7acaad6ee79638a10c67ea9f0b3a8597919989b6fbf2f9a1742d4ba63 Jan 24 12:14:05.409020 systemd-resolved[356]: Positive Trust Anchors: Jan 24 12:14:05.409034 systemd-resolved[356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 12:14:05.409041 systemd-resolved[356]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 12:14:05.409084 systemd-resolved[356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 12:14:05.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:05.438110 systemd-resolved[356]: Defaulting to hostname 'linux'. Jan 24 12:14:05.440294 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 12:14:05.449636 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 12:14:05.732146 kernel: Loading iSCSI transport class v2.0-870. Jan 24 12:14:05.761240 kernel: iscsi: registered transport (tcp) Jan 24 12:14:05.802191 kernel: iscsi: registered transport (qla4xxx) Jan 24 12:14:05.802258 kernel: QLogic iSCSI HBA Driver Jan 24 12:14:05.866077 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 12:14:05.918704 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 12:14:05.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:05.937655 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 12:14:06.034443 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 12:14:06.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:06.044235 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 12:14:06.054468 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 12:14:06.131105 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 12:14:06.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:14:06.141000 audit: BPF prog-id=7 op=LOAD Jan 24 12:14:06.141000 audit: BPF prog-id=8 op=LOAD Jan 24 12:14:06.143259 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 12:14:06.201439 systemd-udevd[580]: Using default interface naming scheme 'v257'. Jan 24 12:14:06.226594 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 12:14:06.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:06.240137 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 12:14:06.305544 dracut-pre-trigger[629]: rd.md=0: removing MD RAID activation Jan 24 12:14:06.381643 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 12:14:06.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:06.397533 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 12:14:06.414793 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 12:14:06.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:06.433000 audit: BPF prog-id=9 op=LOAD Jan 24 12:14:06.435406 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 12:14:06.548567 systemd-networkd[726]: lo: Link UP Jan 24 12:14:06.548639 systemd-networkd[726]: lo: Gained carrier Jan 24 12:14:06.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:06.550503 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 12:14:06.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:06.564296 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 12:14:06.585773 systemd[1]: Reached target network.target - Network. Jan 24 12:14:06.608652 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 12:14:06.689794 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 12:14:06.768215 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 12:14:06.800375 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 12:14:06.823804 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 24 12:14:06.824742 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 12:14:06.824747 systemd-networkd[726]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 24 12:14:06.827598 systemd-networkd[726]: eth0: Link UP Jan 24 12:14:06.828557 systemd-networkd[726]: eth0: Gained carrier Jan 24 12:14:06.828570 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 12:14:06.873347 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 12:14:06.894237 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 12:14:06.904645 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 12:14:06.905016 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 12:14:06.952061 kernel: AES CTR mode by8 optimization enabled Jan 24 12:14:06.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:06.939236 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 12:14:06.950471 systemd-networkd[726]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 12:14:06.953733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 12:14:06.999536 disk-uuid[809]: Primary Header is updated. Jan 24 12:14:06.999536 disk-uuid[809]: Secondary Entries is updated. Jan 24 12:14:06.999536 disk-uuid[809]: Secondary Header is updated. Jan 24 12:14:07.038633 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 24 12:14:07.168421 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 12:14:07.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:07.560272 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 12:14:07.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:07.580039 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 12:14:07.590419 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 12:14:07.609402 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 12:14:07.635428 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 12:14:07.694350 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 12:14:07.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:08.054575 systemd-networkd[726]: eth0: Gained IPv6LL Jan 24 12:14:08.070391 disk-uuid[826]: Warning: The kernel is still using the old partition table. Jan 24 12:14:08.070391 disk-uuid[826]: The new table will be used at the next reboot or after you Jan 24 12:14:08.070391 disk-uuid[826]: run partprobe(8) or kpartx(8) Jan 24 12:14:08.070391 disk-uuid[826]: The operation has completed successfully. Jan 24 12:14:08.112538 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 24 12:14:08.112797 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 12:14:08.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:08.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:08.130415 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 12:14:08.195187 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (860) Jan 24 12:14:08.195245 kernel: BTRFS info (device vda6): first mount of filesystem 1b92a19b-e1e6-4749-8204-553c8c72e265 Jan 24 12:14:08.206101 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 12:14:08.225210 kernel: BTRFS info (device vda6): turning on async discard Jan 24 12:14:08.225277 kernel: BTRFS info (device vda6): enabling free space tree Jan 24 12:14:08.247141 kernel: BTRFS info (device vda6): last unmount of filesystem 1b92a19b-e1e6-4749-8204-553c8c72e265 Jan 24 12:14:08.252480 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 12:14:08.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:08.262073 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 12:14:08.419567 ignition[879]: Ignition 2.24.0 Jan 24 12:14:08.419641 ignition[879]: Stage: fetch-offline Jan 24 12:14:08.419700 ignition[879]: no configs at "/usr/lib/ignition/base.d" Jan 24 12:14:08.419713 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 12:14:08.419794 ignition[879]: parsed url from cmdline: "" Jan 24 12:14:08.419798 ignition[879]: no config URL provided Jan 24 12:14:08.419803 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 12:14:08.419988 ignition[879]: no config at "/usr/lib/ignition/user.ign" Jan 24 12:14:08.420042 ignition[879]: op(1): [started] loading QEMU firmware config module Jan 24 12:14:08.420049 ignition[879]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 24 12:14:08.443404 ignition[879]: op(1): [finished] loading QEMU firmware config module Jan 24 12:14:08.980805 ignition[879]: parsing config with SHA512: b6af94449a9ff222fa73dd727e8bb34ef78a086e91c017daeb3781d9fb8944d120ab15b158c17407ee38c2e2ef67740c05c61178a03bd051b749706bac5a0c3c Jan 24 12:14:09.001746 unknown[879]: fetched base config from "system" Jan 24 12:14:09.002017 unknown[879]: fetched user config from "qemu" Jan 24 12:14:09.016040 ignition[879]: fetch-offline: fetch-offline passed Jan 24 12:14:09.016167 ignition[879]: Ignition finished successfully Jan 24 12:14:09.032252 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 12:14:09.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:09.043070 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Jan 24 12:14:09.044285 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 12:14:09.121461 ignition[888]: Ignition 2.24.0 Jan 24 12:14:09.121526 ignition[888]: Stage: kargs Jan 24 12:14:09.121688 ignition[888]: no configs at "/usr/lib/ignition/base.d" Jan 24 12:14:09.121701 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 12:14:09.143004 ignition[888]: kargs: kargs passed Jan 24 12:14:09.143132 ignition[888]: Ignition finished successfully Jan 24 12:14:09.156363 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 12:14:09.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:09.158181 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 12:14:09.225237 ignition[895]: Ignition 2.24.0 Jan 24 12:14:09.225309 ignition[895]: Stage: disks Jan 24 12:14:09.225500 ignition[895]: no configs at "/usr/lib/ignition/base.d" Jan 24 12:14:09.225514 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 12:14:09.238267 ignition[895]: disks: disks passed Jan 24 12:14:09.239297 ignition[895]: Ignition finished successfully Jan 24 12:14:09.258684 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 12:14:09.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:09.265526 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 12:14:09.272418 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 12:14:09.288375 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 12:14:09.318043 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 12:14:09.326587 systemd[1]: Reached target basic.target - Basic System. Jan 24 12:14:09.348699 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 12:14:09.435528 systemd-fsck[905]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 24 12:14:09.448482 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 12:14:09.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:09.461590 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 12:14:09.773288 kernel: EXT4-fs (vda9): mounted filesystem 04920273-eebf-4ad5-828c-7340043c8075 r/w with ordered data mode. Quota mode: none. Jan 24 12:14:09.774229 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 12:14:09.782082 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 12:14:09.799258 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 12:14:09.836475 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 12:14:09.858626 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (913) Jan 24 12:14:09.844350 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 24 12:14:09.898451 kernel: BTRFS info (device vda6): first mount of filesystem 1b92a19b-e1e6-4749-8204-553c8c72e265 Jan 24 12:14:09.898484 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 12:14:09.844402 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 12:14:09.844433 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 12:14:09.885425 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 12:14:09.907357 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 12:14:09.967297 kernel: BTRFS info (device vda6): turning on async discard Jan 24 12:14:09.967325 kernel: BTRFS info (device vda6): enabling free space tree Jan 24 12:14:09.969655 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 12:14:10.299241 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 12:14:10.336083 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 24 12:14:10.336111 kernel: audit: type=1130 audit(1769256850.304:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:10.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:10.307321 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 12:14:10.337206 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 12:14:10.398487 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 12:14:10.411782 kernel: BTRFS info (device vda6): last unmount of filesystem 1b92a19b-e1e6-4749-8204-553c8c72e265 Jan 24 12:14:10.434204 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 12:14:10.459332 kernel: audit: type=1130 audit(1769256850.438:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:10.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:10.473065 ignition[1010]: INFO : Ignition 2.24.0 Jan 24 12:14:10.473065 ignition[1010]: INFO : Stage: mount Jan 24 12:14:10.482380 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 12:14:10.482380 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 12:14:10.500629 ignition[1010]: INFO : mount: mount passed Jan 24 12:14:10.500629 ignition[1010]: INFO : Ignition finished successfully Jan 24 12:14:10.506236 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 12:14:10.537570 kernel: audit: type=1130 audit(1769256850.508:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:10.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:14:10.510397 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 12:14:10.777733 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 12:14:10.833225 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1023) Jan 24 12:14:10.833298 kernel: BTRFS info (device vda6): first mount of filesystem 1b92a19b-e1e6-4749-8204-553c8c72e265 Jan 24 12:14:10.846246 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 12:14:10.871569 kernel: BTRFS info (device vda6): turning on async discard Jan 24 12:14:10.871662 kernel: BTRFS info (device vda6): enabling free space tree Jan 24 12:14:10.874725 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 12:14:10.948339 ignition[1040]: INFO : Ignition 2.24.0 Jan 24 12:14:10.948339 ignition[1040]: INFO : Stage: files Jan 24 12:14:10.958611 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 12:14:10.958611 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 12:14:10.958611 ignition[1040]: DEBUG : files: compiled without relabeling support, skipping Jan 24 12:14:10.958611 ignition[1040]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 12:14:10.958611 ignition[1040]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 12:14:11.003361 ignition[1040]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 12:14:11.003361 ignition[1040]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 12:14:11.003361 ignition[1040]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 12:14:11.003361 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 12:14:11.003361 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 12:14:10.968318 unknown[1040]: wrote ssh authorized keys file for user: core Jan 24 12:14:11.105303 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 12:14:11.216441 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 12:14:11.216441 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 12:14:11.242432 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 12:14:11.242432 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 12:14:11.242432 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 12:14:11.242432 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 12:14:11.242432 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 12:14:11.242432 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 12:14:11.242432 
ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 12:14:11.242432 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 12:14:11.242432 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 12:14:11.242432 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 12:14:11.242432 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 12:14:11.242432 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 12:14:11.242432 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 12:14:12.059421 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 12:14:12.459688 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 12:14:12.459688 ignition[1040]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 12:14:12.486715 ignition[1040]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 12:14:12.486715 ignition[1040]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 12:14:12.486715 ignition[1040]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 12:14:12.486715 ignition[1040]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 24 12:14:12.486715 ignition[1040]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 12:14:12.486715 ignition[1040]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 12:14:12.486715 ignition[1040]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 24 12:14:12.486715 ignition[1040]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 24 12:14:12.644447 ignition[1040]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 12:14:12.657509 ignition[1040]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 12:14:12.669721 ignition[1040]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 24 12:14:12.669721 ignition[1040]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 24 12:14:12.669721 ignition[1040]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 12:14:12.669721 ignition[1040]: INFO : files: createResultFile: createFiles: 
op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 12:14:12.669721 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 12:14:12.669721 ignition[1040]: INFO : files: files passed Jan 24 12:14:12.669721 ignition[1040]: INFO : Ignition finished successfully Jan 24 12:14:12.732207 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 12:14:12.761342 kernel: audit: type=1130 audit(1769256852.737:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:12.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:12.740277 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 12:14:12.775450 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 12:14:12.801080 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 12:14:12.810254 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 12:14:12.863360 kernel: audit: type=1130 audit(1769256852.818:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:12.863387 kernel: audit: type=1131 audit(1769256852.818:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:12.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:12.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:12.859137 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 12:14:12.902482 kernel: audit: type=1130 audit(1769256852.873:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:12.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:12.902576 initrd-setup-root-after-ignition[1071]: grep: /sysroot/oem/oem-release: No such file or directory Jan 24 12:14:12.874318 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 24 12:14:12.928616 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 12:14:12.928616 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 12:14:12.952459 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 12:14:12.965105 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 12:14:13.079581 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 12:14:13.079928 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 12:14:13.131366 kernel: audit: type=1130 audit(1769256853.084:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.131403 kernel: audit: type=1131 audit(1769256853.084:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.086160 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 12:14:13.137576 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 12:14:13.138690 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 12:14:13.165366 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 12:14:13.239171 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 12:14:13.252665 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 12:14:13.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.299119 kernel: audit: type=1130 audit(1769256853.249:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.321736 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 24 12:14:13.322188 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 12:14:13.330406 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 12:14:13.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.355480 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 12:14:13.361225 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 12:14:13.361453 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jan 24 12:14:13.368626 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 12:14:13.385389 systemd[1]: Stopped target basic.target - Basic System. Jan 24 12:14:13.399351 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 12:14:13.424461 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 12:14:13.448152 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 12:14:13.474262 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 24 12:14:13.500333 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 12:14:13.509117 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 12:14:13.526626 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 12:14:13.539166 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 12:14:13.566275 systemd[1]: Stopped target swap.target - Swaps. Jan 24 12:14:13.573218 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 12:14:13.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.573393 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 12:14:13.585251 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 12:14:13.597761 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 12:14:13.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.612943 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 12:14:13.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.615930 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 12:14:13.628414 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 12:14:13.628646 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 12:14:13.647330 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 12:14:13.647559 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 12:14:13.657064 systemd[1]: Stopped target paths.target - Path Units. Jan 24 12:14:13.670521 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 12:14:13.673414 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 12:14:13.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.686020 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 12:14:13.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.699702 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 24 12:14:13.708599 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 12:14:13.708769 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 12:14:13.722069 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 12:14:13.722247 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 12:14:13.738561 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 24 12:14:13.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.738777 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 24 12:14:13.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.909127 ignition[1097]: INFO : Ignition 2.24.0 Jan 24 12:14:13.909127 ignition[1097]: INFO : Stage: umount Jan 24 12:14:13.909127 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 12:14:13.909127 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 12:14:13.909127 ignition[1097]: INFO : umount: umount passed Jan 24 12:14:13.909127 ignition[1097]: INFO : Ignition finished successfully Jan 24 12:14:13.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.752402 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 12:14:13.752580 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 12:14:13.766583 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 12:14:13.766764 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 12:14:13.786246 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 12:14:13.844129 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 12:14:13.844735 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 12:14:14.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.845380 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 12:14:13.876122 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 12:14:13.876366 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 12:14:14.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:14:14.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.886234 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 12:14:14.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.886398 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 12:14:13.900949 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 12:14:14.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:13.910310 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 12:14:14.020433 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 12:14:14.022212 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 12:14:14.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.022385 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 12:14:14.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.037465 systemd[1]: Stopped target network.target - Network. Jan 24 12:14:14.061652 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 12:14:14.061739 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 12:14:14.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.068330 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 12:14:14.068406 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 12:14:14.068649 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 12:14:14.068716 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 12:14:14.082488 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 12:14:14.212000 audit: BPF prog-id=6 op=UNLOAD Jan 24 12:14:14.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.249000 audit: BPF prog-id=9 op=UNLOAD Jan 24 12:14:14.082732 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 12:14:14.108714 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 12:14:14.124129 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 12:14:14.137325 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 12:14:14.137527 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 24 12:14:14.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.147740 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 12:14:14.148400 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 12:14:14.180147 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 12:14:14.180342 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 12:14:14.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.220444 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 12:14:14.220657 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 12:14:14.249242 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 24 12:14:14.257208 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 12:14:14.257276 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 12:14:14.275650 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 12:14:14.279683 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 12:14:14.279802 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 12:14:14.308571 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 12:14:14.308657 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 12:14:14.319575 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 12:14:14.319691 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 12:14:14.332173 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 12:14:14.454368 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 12:14:14.454655 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 12:14:14.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.482428 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 12:14:14.482799 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 12:14:14.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.507551 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 12:14:14.507721 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 12:14:14.522485 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 12:14:14.522544 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 12:14:14.530286 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 24 12:14:14.530355 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 12:14:14.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.572179 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 12:14:14.572275 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 12:14:14.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.601445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 12:14:14.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.601622 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 12:14:14.626535 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 12:14:14.628157 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 24 12:14:14.628244 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 12:14:14.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.642505 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 12:14:14.642610 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 12:14:14.673648 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 24 12:14:14.673737 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 12:14:14.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.687794 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 12:14:14.688072 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 12:14:14.700277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 12:14:14.700339 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 24 12:14:14.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:14.761576 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 12:14:14.761792 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 12:14:14.774552 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 12:14:14.784140 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 12:14:14.856398 systemd[1]: Switching root. Jan 24 12:14:14.891933 systemd-journald[319]: Received SIGTERM from PID 1 (systemd). Jan 24 12:14:14.892077 systemd-journald[319]: Journal stopped Jan 24 12:14:17.406664 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 12:14:17.406742 kernel: SELinux: policy capability open_perms=1 Jan 24 12:14:17.406761 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 12:14:17.406778 kernel: SELinux: policy capability always_check_network=0 Jan 24 12:14:17.406795 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 12:14:17.406923 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 12:14:17.406941 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 12:14:17.406962 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 12:14:17.407032 kernel: SELinux: policy capability userspace_initial_context=0 Jan 24 12:14:17.407051 systemd[1]: Successfully loaded SELinux policy in 125.560ms. Jan 24 12:14:17.407085 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.757ms. Jan 24 12:14:17.407105 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 24 12:14:17.407121 systemd[1]: Detected virtualization kvm. Jan 24 12:14:17.407143 systemd[1]: Detected architecture x86-64. Jan 24 12:14:17.407165 systemd[1]: Detected first boot. Jan 24 12:14:17.407195 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 24 12:14:17.407212 kernel: kauditd_printk_skb: 39 callbacks suppressed Jan 24 12:14:17.407227 kernel: audit: type=1334 audit(1769256855.329:86): prog-id=10 op=LOAD Jan 24 12:14:17.407253 kernel: audit: type=1334 audit(1769256855.329:87): prog-id=10 op=UNLOAD Jan 24 12:14:17.407272 kernel: audit: type=1334 audit(1769256855.329:88): prog-id=11 op=LOAD Jan 24 12:14:17.407288 kernel: audit: type=1334 audit(1769256855.329:89): prog-id=11 op=UNLOAD Jan 24 12:14:17.407307 zram_generator::config[1142]: No configuration found. Jan 24 12:14:17.407325 kernel: Guest personality initialized and is inactive Jan 24 12:14:17.407340 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 24 12:14:17.407364 kernel: Initialized host personality Jan 24 12:14:17.407379 kernel: NET: Registered PF_VSOCK protocol family Jan 24 12:14:17.407396 systemd[1]: Populated /etc with preset unit settings. 
Jan 24 12:14:17.407412 kernel: audit: type=1334 audit(1769256856.246:90): prog-id=12 op=LOAD Jan 24 12:14:17.407431 kernel: audit: type=1334 audit(1769256856.246:91): prog-id=3 op=UNLOAD Jan 24 12:14:17.407449 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 12:14:17.407465 kernel: audit: type=1334 audit(1769256856.247:92): prog-id=13 op=LOAD Jan 24 12:14:17.407480 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 12:14:17.407498 kernel: audit: type=1334 audit(1769256856.247:93): prog-id=14 op=LOAD Jan 24 12:14:17.407512 kernel: audit: type=1334 audit(1769256856.247:94): prog-id=4 op=UNLOAD Jan 24 12:14:17.407531 kernel: audit: type=1334 audit(1769256856.247:95): prog-id=5 op=UNLOAD Jan 24 12:14:17.407550 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 12:14:17.407573 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 12:14:17.407589 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 12:14:17.407608 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 12:14:17.407625 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 12:14:17.407642 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 12:14:17.407663 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 12:14:17.407679 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 12:14:17.407698 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 12:14:17.407714 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 12:14:17.407730 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 12:14:17.407746 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 12:14:17.407766 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 12:14:17.407784 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 12:14:17.407801 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 12:14:17.407912 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 12:14:17.407931 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 12:14:17.407949 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 12:14:17.407966 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 12:14:17.408044 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 12:14:17.408069 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 12:14:17.408086 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 12:14:17.408101 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 12:14:17.408117 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 12:14:17.408133 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 24 12:14:17.408150 systemd[1]: Reached target slices.target - Slice Units. Jan 24 12:14:17.408170 systemd[1]: Reached target swap.target - Swaps. 
Jan 24 12:14:17.408190 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 12:14:17.408206 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 12:14:17.408222 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 24 12:14:17.408238 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 24 12:14:17.408257 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 24 12:14:17.408274 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 12:14:17.408290 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 24 12:14:17.408310 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 24 12:14:17.408326 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 12:14:17.408344 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 12:14:17.408361 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 12:14:17.408377 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 12:14:17.408393 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 12:14:17.408409 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 12:14:17.408430 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 12:14:17.408448 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 12:14:17.408465 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 12:14:17.408481 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 12:14:17.408497 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 12:14:17.408515 systemd[1]: Reached target machines.target - Containers. Jan 24 12:14:17.408532 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 12:14:17.408553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 12:14:17.408569 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 12:14:17.408584 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 12:14:17.408603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 12:14:17.408619 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 12:14:17.408635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 12:14:17.408653 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 12:14:17.408669 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 12:14:17.408689 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 12:14:17.408705 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 12:14:17.408722 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 12:14:17.408737 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jan 24 12:14:17.408755 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 12:14:17.408778 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 12:14:17.408794 kernel: ACPI: bus type drm_connector registered Jan 24 12:14:17.408901 kernel: fuse: init (API version 7.41) Jan 24 12:14:17.408921 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 12:14:17.408937 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 12:14:17.408961 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 12:14:17.409036 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 12:14:17.409059 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 24 12:14:17.409078 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 12:14:17.409094 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 12:14:17.409110 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 12:14:17.409130 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 12:14:17.409176 systemd-journald[1228]: Collecting audit messages is enabled. Jan 24 12:14:17.409208 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 12:14:17.409225 systemd-journald[1228]: Journal started Jan 24 12:14:17.409251 systemd-journald[1228]: Runtime Journal (/run/log/journal/8228d0f75b1e4f8996191d1a2dfbc081) is 6M, max 48.2M, 42.1M free. Jan 24 12:14:16.768000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 24 12:14:17.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.274000 audit: BPF prog-id=14 op=UNLOAD Jan 24 12:14:17.274000 audit: BPF prog-id=13 op=UNLOAD Jan 24 12:14:17.282000 audit: BPF prog-id=15 op=LOAD Jan 24 12:14:17.283000 audit: BPF prog-id=16 op=LOAD Jan 24 12:14:17.284000 audit: BPF prog-id=17 op=LOAD Jan 24 12:14:17.403000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 24 12:14:17.403000 audit[1228]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc1311b320 a2=4000 a3=0 items=0 ppid=1 pid=1228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:17.403000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 24 12:14:16.220674 systemd[1]: Queued start job for default target multi-user.target. 
Jan 24 12:14:16.248672 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 24 12:14:16.250041 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 12:14:16.250630 systemd[1]: systemd-journald.service: Consumed 2.164s CPU time. Jan 24 12:14:17.423128 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 12:14:17.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.430118 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 12:14:17.435587 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 12:14:17.441217 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 12:14:17.446138 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 12:14:17.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.452162 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 12:14:17.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.458599 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 12:14:17.459504 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 12:14:17.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.465791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 12:14:17.466332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 12:14:17.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.472613 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 12:14:17.473079 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 12:14:17.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:14:17.478722 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 12:14:17.479144 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 12:14:17.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.485681 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 12:14:17.486053 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 12:14:17.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.492339 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 12:14:17.492617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 12:14:17.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.498579 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 12:14:17.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.505497 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 12:14:17.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.514216 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 12:14:17.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.522575 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 24 12:14:17.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.530660 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 24 12:14:17.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.556319 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 12:14:17.563120 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 24 12:14:17.571679 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 12:14:17.589189 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 12:14:17.596283 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 12:14:17.596375 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 12:14:17.604315 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 24 12:14:17.613328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 12:14:17.613562 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 12:14:17.630426 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 12:14:17.638144 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 12:14:17.643942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 12:14:17.648582 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 12:14:17.655934 systemd-journald[1228]: Time spent on flushing to /var/log/journal/8228d0f75b1e4f8996191d1a2dfbc081 is 22.130ms for 1107 entries. Jan 24 12:14:17.655934 systemd-journald[1228]: System Journal (/var/log/journal/8228d0f75b1e4f8996191d1a2dfbc081) is 8M, max 163.5M, 155.5M free. Jan 24 12:14:17.693311 systemd-journald[1228]: Received client request to flush runtime journal. Jan 24 12:14:17.654413 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 12:14:17.661436 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 12:14:17.671218 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 12:14:17.680143 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 12:14:17.690657 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 12:14:17.700027 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 12:14:17.707674 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 12:14:17.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:14:17.716561 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 12:14:17.725077 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 12:14:17.732921 kernel: loop1: detected capacity change from 0 to 224512 Jan 24 12:14:17.734360 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 24 12:14:17.742036 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 12:14:17.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.755414 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 24 12:14:17.755683 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 24 12:14:17.762513 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 12:14:17.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.772352 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 12:14:17.785033 kernel: loop2: detected capacity change from 0 to 50784 Jan 24 12:14:17.786331 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 24 12:14:17.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.828956 kernel: loop3: detected capacity change from 0 to 111560 Jan 24 12:14:17.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.831334 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 12:14:17.839000 audit: BPF prog-id=18 op=LOAD Jan 24 12:14:17.839000 audit: BPF prog-id=19 op=LOAD Jan 24 12:14:17.839000 audit: BPF prog-id=20 op=LOAD Jan 24 12:14:17.840920 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 24 12:14:17.847000 audit: BPF prog-id=21 op=LOAD Jan 24 12:14:17.851043 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 12:14:17.858028 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 12:14:17.864000 audit: BPF prog-id=22 op=LOAD Jan 24 12:14:17.871000 audit: BPF prog-id=23 op=LOAD Jan 24 12:14:17.871000 audit: BPF prog-id=24 op=LOAD Jan 24 12:14:17.873361 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 24 12:14:17.879000 audit: BPF prog-id=25 op=LOAD Jan 24 12:14:17.879000 audit: BPF prog-id=26 op=LOAD Jan 24 12:14:17.879000 audit: BPF prog-id=27 op=LOAD Jan 24 12:14:17.881935 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 12:14:17.903498 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jan 24 12:14:17.904471 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. 
Jan 24 12:14:17.913061 kernel: loop4: detected capacity change from 0 to 224512 Jan 24 12:14:17.913535 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 12:14:17.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.942923 kernel: loop5: detected capacity change from 0 to 50784 Jan 24 12:14:17.966300 kernel: loop6: detected capacity change from 0 to 111560 Jan 24 12:14:17.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.976696 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 12:14:17.978316 systemd-nsresourced[1288]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 24 12:14:17.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:17.982547 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 24 12:14:17.992123 (sd-merge)[1292]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 24 12:14:17.999201 (sd-merge)[1292]: Merged extensions into '/usr'. Jan 24 12:14:18.008576 systemd[1]: Reload requested from client PID 1263 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 12:14:18.008749 systemd[1]: Reloading... Jan 24 12:14:18.105798 systemd-resolved[1286]: Positive Trust Anchors: Jan 24 12:14:18.106434 systemd-resolved[1286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 12:14:18.106508 systemd-resolved[1286]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 12:14:18.106589 systemd-resolved[1286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 12:14:18.114722 systemd-resolved[1286]: Defaulting to hostname 'linux'. Jan 24 12:14:18.122966 zram_generator::config[1338]: No configuration found. Jan 24 12:14:18.128048 systemd-oomd[1285]: No swap; memory pressure usage will be degraded Jan 24 12:14:18.372769 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 12:14:18.373349 systemd[1]: Reloading finished in 363 ms. Jan 24 12:14:18.411636 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 12:14:18.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:18.416439 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. 
Jan 24 12:14:18.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:18.421639 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 12:14:18.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:18.428030 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 12:14:18.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:18.443163 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 12:14:18.465100 systemd[1]: Starting ensure-sysext.service... Jan 24 12:14:18.469469 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 12:14:18.473000 audit: BPF prog-id=8 op=UNLOAD Jan 24 12:14:18.473000 audit: BPF prog-id=7 op=UNLOAD Jan 24 12:14:18.474000 audit: BPF prog-id=28 op=LOAD Jan 24 12:14:18.487000 audit: BPF prog-id=29 op=LOAD Jan 24 12:14:18.489551 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 12:14:18.497000 audit: BPF prog-id=30 op=LOAD Jan 24 12:14:18.497000 audit: BPF prog-id=18 op=UNLOAD Jan 24 12:14:18.497000 audit: BPF prog-id=31 op=LOAD Jan 24 12:14:18.498000 audit: BPF prog-id=32 op=LOAD Jan 24 12:14:18.498000 audit: BPF prog-id=19 op=UNLOAD Jan 24 12:14:18.498000 audit: BPF prog-id=20 op=UNLOAD Jan 24 12:14:18.499000 audit: BPF prog-id=33 op=LOAD Jan 24 12:14:18.499000 audit: BPF prog-id=25 op=UNLOAD Jan 24 12:14:18.499000 audit: BPF prog-id=34 op=LOAD Jan 24 12:14:18.500000 audit: BPF prog-id=35 op=LOAD Jan 24 12:14:18.500000 audit: BPF prog-id=26 op=UNLOAD Jan 24 12:14:18.500000 audit: BPF prog-id=27 op=UNLOAD Jan 24 12:14:18.501000 audit: BPF prog-id=36 op=LOAD Jan 24 12:14:18.501000 audit: BPF prog-id=22 op=UNLOAD Jan 24 12:14:18.501000 audit: BPF prog-id=37 op=LOAD Jan 24 12:14:18.501000 audit: BPF prog-id=38 op=LOAD Jan 24 12:14:18.501000 audit: BPF prog-id=23 op=UNLOAD Jan 24 12:14:18.501000 audit: BPF prog-id=24 op=UNLOAD Jan 24 12:14:18.502000 audit: BPF prog-id=39 op=LOAD Jan 24 12:14:18.502000 audit: BPF prog-id=15 op=UNLOAD Jan 24 12:14:18.503000 audit: BPF prog-id=40 op=LOAD Jan 24 12:14:18.503000 audit: BPF prog-id=41 op=LOAD Jan 24 12:14:18.503000 audit: BPF prog-id=16 op=UNLOAD Jan 24 12:14:18.503000 audit: BPF prog-id=17 op=UNLOAD Jan 24 12:14:18.503000 audit: BPF prog-id=42 op=LOAD Jan 24 12:14:18.504000 audit: BPF prog-id=21 op=UNLOAD Jan 24 12:14:18.515422 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 24 12:14:18.516107 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 24 12:14:18.516619 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 12:14:18.519053 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Jan 24 12:14:18.519197 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. 
Jan 24 12:14:18.529111 systemd[1]: Reload requested from client PID 1374 ('systemctl') (unit ensure-sysext.service)... Jan 24 12:14:18.529174 systemd[1]: Reloading... Jan 24 12:14:18.530661 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 12:14:18.530720 systemd-tmpfiles[1375]: Skipping /boot Jan 24 12:14:18.549642 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 12:14:18.549708 systemd-tmpfiles[1375]: Skipping /boot Jan 24 12:14:18.557189 systemd-udevd[1376]: Using default interface naming scheme 'v257'. Jan 24 12:14:18.615960 zram_generator::config[1408]: No configuration found. Jan 24 12:14:18.746918 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 12:14:18.756892 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 12:14:18.763915 kernel: ACPI: button: Power Button [PWRF] Jan 24 12:14:18.778564 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 12:14:18.779033 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 12:14:19.020574 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 12:14:19.021041 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 12:14:19.027477 systemd[1]: Reloading finished in 497 ms. Jan 24 12:14:19.523790 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 12:14:19.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:19.541000 audit: BPF prog-id=43 op=LOAD Jan 24 12:14:19.541000 audit: BPF prog-id=39 op=UNLOAD Jan 24 12:14:19.542000 audit: BPF prog-id=44 op=LOAD Jan 24 12:14:19.542000 audit: BPF prog-id=45 op=LOAD Jan 24 12:14:19.543000 audit: BPF prog-id=40 op=UNLOAD Jan 24 12:14:19.543000 audit: BPF prog-id=41 op=UNLOAD Jan 24 12:14:19.545000 audit: BPF prog-id=46 op=LOAD Jan 24 12:14:19.545000 audit: BPF prog-id=36 op=UNLOAD Jan 24 12:14:19.546000 audit: BPF prog-id=47 op=LOAD Jan 24 12:14:19.546000 audit: BPF prog-id=48 op=LOAD Jan 24 12:14:19.546000 audit: BPF prog-id=37 op=UNLOAD Jan 24 12:14:19.546000 audit: BPF prog-id=38 op=UNLOAD Jan 24 12:14:19.549000 audit: BPF prog-id=49 op=LOAD Jan 24 12:14:19.549000 audit: BPF prog-id=42 op=UNLOAD Jan 24 12:14:19.553000 audit: BPF prog-id=50 op=LOAD Jan 24 12:14:19.553000 audit: BPF prog-id=33 op=UNLOAD Jan 24 12:14:19.553000 audit: BPF prog-id=51 op=LOAD Jan 24 12:14:19.553000 audit: BPF prog-id=52 op=LOAD Jan 24 12:14:19.553000 audit: BPF prog-id=34 op=UNLOAD Jan 24 12:14:19.553000 audit: BPF prog-id=35 op=UNLOAD Jan 24 12:14:19.565000 audit: BPF prog-id=53 op=LOAD Jan 24 12:14:19.565000 audit: BPF prog-id=54 op=LOAD Jan 24 12:14:19.565000 audit: BPF prog-id=28 op=UNLOAD Jan 24 12:14:19.565000 audit: BPF prog-id=29 op=UNLOAD Jan 24 12:14:19.569000 audit: BPF prog-id=55 op=LOAD Jan 24 12:14:19.586000 audit: BPF prog-id=30 op=UNLOAD Jan 24 12:14:19.586000 audit: BPF prog-id=56 op=LOAD Jan 24 12:14:19.586000 audit: BPF prog-id=57 op=LOAD Jan 24 12:14:19.586000 audit: BPF prog-id=31 op=UNLOAD Jan 24 12:14:19.586000 audit: BPF prog-id=32 op=UNLOAD Jan 24 12:14:19.594620 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 24 12:14:19.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:19.632669 kernel: kvm_amd: TSC scaling supported Jan 24 12:14:19.632761 kernel: kvm_amd: Nested Virtualization enabled Jan 24 12:14:19.632783 kernel: kvm_amd: Nested Paging enabled Jan 24 12:14:19.639561 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 24 12:14:19.639632 kernel: kvm_amd: PMU virtualization is disabled Jan 24 12:14:19.735348 systemd[1]: Finished ensure-sysext.service. Jan 24 12:14:19.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:19.771974 kernel: EDAC MC: Ver: 3.0.0 Jan 24 12:14:19.774681 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 12:14:19.777153 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 24 12:14:19.783698 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 12:14:19.791268 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 12:14:19.806790 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 12:14:19.816190 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 12:14:19.825390 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 12:14:19.833511 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 12:14:19.839198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 12:14:19.839390 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 12:14:19.842661 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 12:14:19.855731 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 12:14:19.864533 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 12:14:19.872173 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 12:14:19.891000 audit: BPF prog-id=58 op=LOAD Jan 24 12:14:19.894176 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 12:14:19.901000 audit: BPF prog-id=59 op=LOAD Jan 24 12:14:19.904552 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 12:14:19.916128 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 12:14:19.936234 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 24 12:14:19.945000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 24 12:14:19.946979 augenrules[1523]: No rules Jan 24 12:14:19.945000 audit[1523]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe840768e0 a2=420 a3=0 items=0 ppid=1490 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:19.945000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 12:14:19.953486 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 12:14:19.957743 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 12:14:19.958275 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 12:14:19.965231 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 12:14:19.965471 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 12:14:19.966383 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 12:14:19.966597 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 12:14:19.973176 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 12:14:19.973683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 12:14:19.974127 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 12:14:19.974738 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 12:14:19.975072 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 12:14:19.975735 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 12:14:19.982150 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 12:14:20.019796 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 12:14:20.024761 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 12:14:20.025075 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 12:14:20.025265 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 12:14:20.123711 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 12:14:20.126186 systemd-networkd[1516]: lo: Link UP Jan 24 12:14:20.126248 systemd-networkd[1516]: lo: Gained carrier Jan 24 12:14:20.131568 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 12:14:20.131581 systemd-networkd[1516]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 24 12:14:20.134146 systemd-networkd[1516]: eth0: Link UP Jan 24 12:14:20.135727 systemd-networkd[1516]: eth0: Gained carrier Jan 24 12:14:20.135798 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 12:14:20.177447 systemd-networkd[1516]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 12:14:20.179154 systemd-timesyncd[1518]: Network configuration changed, trying to establish connection. Jan 24 12:14:21.554500 systemd-timesyncd[1518]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 24 12:14:21.554623 systemd-timesyncd[1518]: Initial clock synchronization to Sat 2026-01-24 12:14:21.554394 UTC. Jan 24 12:14:21.554995 systemd-resolved[1286]: Clock change detected. Flushing caches. Jan 24 12:14:21.813786 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 12:14:21.825025 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 12:14:21.833925 systemd[1]: Reached target network.target - Network. Jan 24 12:14:21.838229 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 12:14:21.845511 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 24 12:14:21.856354 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 12:14:21.891051 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 24 12:14:21.948866 ldconfig[1502]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 12:14:21.956745 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 12:14:21.966430 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 12:14:22.009711 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 12:14:22.018425 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 12:14:22.025588 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 12:14:22.033447 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 12:14:22.041478 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 24 12:14:22.048986 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 12:14:22.055864 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 12:14:22.066355 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 24 12:14:22.116885 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 24 12:14:22.160211 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 12:14:22.208317 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 12:14:22.208409 systemd[1]: Reached target paths.target - Path Units. Jan 24 12:14:22.243321 systemd[1]: Reached target timers.target - Timer Units. Jan 24 12:14:22.279698 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 12:14:22.323853 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jan 24 12:14:22.366843 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 24 12:14:22.394752 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 24 12:14:22.399840 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 24 12:14:22.408517 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 12:14:22.413810 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 24 12:14:22.420578 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 12:14:22.426312 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 12:14:22.430979 systemd[1]: Reached target basic.target - Basic System. Jan 24 12:14:22.435553 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 12:14:22.435637 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 12:14:22.437640 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 12:14:22.445535 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 12:14:22.452343 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 12:14:22.459916 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 12:14:22.466876 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 12:14:22.471429 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 12:14:22.473229 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 24 12:14:22.473499 jq[1558]: false Jan 24 12:14:22.479370 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 12:14:22.487222 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 12:14:22.496673 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 12:14:22.502928 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 12:14:22.507957 extend-filesystems[1559]: Found /dev/vda6 Jan 24 12:14:22.514390 extend-filesystems[1559]: Found /dev/vda9 Jan 24 12:14:22.511897 oslogin_cache_refresh[1560]: Refreshing passwd entry cache Jan 24 12:14:22.518414 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing passwd entry cache Jan 24 12:14:22.518606 extend-filesystems[1559]: Checking size of /dev/vda9 Jan 24 12:14:22.528030 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 12:14:22.533905 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 12:14:22.534573 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 12:14:22.535337 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting users, quitting Jan 24 12:14:22.535333 oslogin_cache_refresh[1560]: Failure getting users, quitting Jan 24 12:14:22.535626 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 24 12:14:22.535626 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing group entry cache Jan 24 12:14:22.535364 oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 12:14:22.535416 oslogin_cache_refresh[1560]: Refreshing group entry cache Jan 24 12:14:22.536383 extend-filesystems[1559]: Resized partition /dev/vda9 Jan 24 12:14:22.541377 extend-filesystems[1582]: resize2fs 1.47.3 (8-Jul-2025) Jan 24 12:14:22.544476 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 12:14:22.556728 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 24 12:14:22.564225 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting groups, quitting Jan 24 12:14:22.564225 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 12:14:22.563235 oslogin_cache_refresh[1560]: Failure getting groups, quitting Jan 24 12:14:22.563255 oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 12:14:22.565338 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 12:14:22.584898 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 12:14:22.594921 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 12:14:22.595481 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 12:14:22.596437 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 24 12:14:22.604784 jq[1585]: true Jan 24 12:14:22.596731 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 24 12:14:22.604782 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 12:14:22.605570 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 12:14:22.622476 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 24 12:14:22.622551 update_engine[1581]: I20260124 12:14:22.619736 1581 main.cc:92] Flatcar Update Engine starting Jan 24 12:14:22.623810 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 12:14:22.626464 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 12:14:22.642613 extend-filesystems[1582]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 24 12:14:22.642613 extend-filesystems[1582]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 24 12:14:22.642613 extend-filesystems[1582]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 24 12:14:22.663611 extend-filesystems[1559]: Resized filesystem in /dev/vda9 Jan 24 12:14:22.670256 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 12:14:22.670686 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 12:14:22.682434 jq[1596]: true Jan 24 12:14:22.735610 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 12:14:22.737476 tar[1593]: linux-amd64/LICENSE Jan 24 12:14:22.737476 tar[1593]: linux-amd64/helm Jan 24 12:14:22.738051 systemd-logind[1577]: Watching system buttons on /dev/input/event2 (Power Button) Jan 24 12:14:22.738230 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 12:14:22.746564 systemd-logind[1577]: New seat seat0. 
Jan 24 12:14:22.748428 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 12:14:22.794010 bash[1627]: Updated "/home/core/.ssh/authorized_keys" Jan 24 12:14:22.796464 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 12:14:22.799795 dbus-daemon[1556]: [system] SELinux support is enabled Jan 24 12:14:22.806069 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 12:14:22.815877 update_engine[1581]: I20260124 12:14:22.815768 1581 update_check_scheduler.cc:74] Next update check in 7m25s Jan 24 12:14:22.820465 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 24 12:14:22.821503 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 12:14:22.821592 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 12:14:22.822799 dbus-daemon[1556]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 24 12:14:22.830996 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 12:14:22.831055 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 12:14:22.839505 systemd[1]: Started update-engine.service - Update Engine. Jan 24 12:14:22.853630 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 12:14:22.913609 sshd_keygen[1584]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 12:14:22.932765 locksmithd[1635]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 12:14:22.958864 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 12:14:22.970770 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 12:14:22.978007 systemd[1]: Started sshd@0-10.0.0.151:22-10.0.0.1:38182.service - OpenSSH per-connection server daemon (10.0.0.1:38182). Jan 24 12:14:22.991644 containerd[1599]: time="2026-01-24T12:14:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 24 12:14:22.993973 containerd[1599]: time="2026-01-24T12:14:22.993932778Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 24 12:14:23.014614 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 12:14:23.015332 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jan 24 12:14:23.021789 containerd[1599]: time="2026-01-24T12:14:23.021644474Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.637µs" Jan 24 12:14:23.021789 containerd[1599]: time="2026-01-24T12:14:23.021745292Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 24 12:14:23.021868 containerd[1599]: time="2026-01-24T12:14:23.021795816Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 24 12:14:23.021868 containerd[1599]: time="2026-01-24T12:14:23.021821494Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 24 12:14:23.026628 containerd[1599]: time="2026-01-24T12:14:23.026525987Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 24 12:14:23.026628 containerd[1599]: time="2026-01-24T12:14:23.026593704Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 24 12:14:23.027470 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 12:14:23.027780 containerd[1599]: time="2026-01-24T12:14:23.027730967Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 24 12:14:23.027780 containerd[1599]: time="2026-01-24T12:14:23.027751025Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 24 12:14:23.028246 containerd[1599]: time="2026-01-24T12:14:23.027966066Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 24 12:14:23.028246 containerd[1599]: time="2026-01-24T12:14:23.028030767Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 24 12:14:23.028246 containerd[1599]: time="2026-01-24T12:14:23.028044182Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 24 12:14:23.028246 containerd[1599]: time="2026-01-24T12:14:23.028059300Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 24 12:14:23.028832 containerd[1599]: time="2026-01-24T12:14:23.028696740Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 24 12:14:23.028832 containerd[1599]: time="2026-01-24T12:14:23.028759577Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 24 12:14:23.028896 containerd[1599]: time="2026-01-24T12:14:23.028853552Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 24 12:14:23.029211 containerd[1599]: time="2026-01-24T12:14:23.029062273Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 24 12:14:23.029713 containerd[1599]: time="2026-01-24T12:14:23.029662193Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 24 12:14:23.029713 containerd[1599]: time="2026-01-24T12:14:23.029709270Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 24 12:14:23.029833 containerd[1599]: time="2026-01-24T12:14:23.029779722Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 24 12:14:23.031351 containerd[1599]: time="2026-01-24T12:14:23.030810256Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 24 12:14:23.031351 containerd[1599]: time="2026-01-24T12:14:23.030885677Z" level=info msg="metadata content store policy set" policy=shared Jan 24 12:14:23.042990 containerd[1599]: time="2026-01-24T12:14:23.042539103Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 24 12:14:23.042990 containerd[1599]: time="2026-01-24T12:14:23.042611278Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 24 12:14:23.042990 containerd[1599]: time="2026-01-24T12:14:23.042703911Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 24 12:14:23.042990 containerd[1599]: time="2026-01-24T12:14:23.042724209Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 24 12:14:23.042990 containerd[1599]: time="2026-01-24T12:14:23.042739306Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 24 12:14:23.042990 containerd[1599]: time="2026-01-24T12:14:23.042753994Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 24 12:14:23.042990 containerd[1599]: time="2026-01-24T12:14:23.042767750Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 24 12:14:23.042990 containerd[1599]: time="2026-01-24T12:14:23.042780624Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 24 12:14:23.042990 containerd[1599]: time="2026-01-24T12:14:23.042795562Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 24 12:14:23.042990 containerd[1599]: time="2026-01-24T12:14:23.042810700Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 24 12:14:23.042990 containerd[1599]: time="2026-01-24T12:14:23.042823624Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 24 12:14:23.042990 containerd[1599]: time="2026-01-24T12:14:23.042836207Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 24 12:14:23.042990 containerd[1599]: time="2026-01-24T12:14:23.042850174Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 24 12:14:23.042990 containerd[1599]: time="2026-01-24T12:14:23.042867466Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043013399Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043037955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043054355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043067349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043170131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043196270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043218832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043233790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043247806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043330450Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043347182Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043372318Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043421450Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043438342Z" level=info msg="Start snapshots syncer" Jan 24 12:14:23.043506 containerd[1599]: time="2026-01-24T12:14:23.043467346Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 24 12:14:23.043829 containerd[1599]: time="2026-01-24T12:14:23.043768969Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 24 12:14:23.043986 containerd[1599]: time="2026-01-24T12:14:23.043830414Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 24 12:14:23.043986 containerd[1599]: time="2026-01-24T12:14:23.043879165Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 24 12:14:23.044025 containerd[1599]: time="2026-01-24T12:14:23.044001573Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 24 12:14:23.044051 containerd[1599]: time="2026-01-24T12:14:23.044031389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 24 12:14:23.044051 containerd[1599]: time="2026-01-24T12:14:23.044045656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 24 12:14:23.044160 containerd[1599]: time="2026-01-24T12:14:23.044058449Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 24 12:14:23.044160 containerd[1599]: time="2026-01-24T12:14:23.044071644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 24 12:14:23.044207 containerd[1599]: time="2026-01-24T12:14:23.044181459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 24 12:14:23.044207 containerd[1599]: time="2026-01-24T12:14:23.044199884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 24 12:14:23.044245 containerd[1599]: time="2026-01-24T12:14:23.044212597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 24 
12:14:23.044245 containerd[1599]: time="2026-01-24T12:14:23.044226053Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 24 12:14:23.044339 containerd[1599]: time="2026-01-24T12:14:23.044319447Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 24 12:14:23.044359 containerd[1599]: time="2026-01-24T12:14:23.044338743Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 24 12:14:23.044359 containerd[1599]: time="2026-01-24T12:14:23.044350645Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 24 12:14:23.044393 containerd[1599]: time="2026-01-24T12:14:23.044363519Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 24 12:14:23.044393 containerd[1599]: time="2026-01-24T12:14:23.044377135Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 24 12:14:23.044432 containerd[1599]: time="2026-01-24T12:14:23.044403484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 24 12:14:23.044432 containerd[1599]: time="2026-01-24T12:14:23.044423271Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 24 12:14:23.044464 containerd[1599]: time="2026-01-24T12:14:23.044441906Z" level=info msg="runtime interface created" Jan 24 12:14:23.044464 containerd[1599]: time="2026-01-24T12:14:23.044450181Z" level=info msg="created NRI interface" Jan 24 12:14:23.044496 containerd[1599]: time="2026-01-24T12:14:23.044464808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 24 12:14:23.044496 containerd[1599]: time="2026-01-24T12:14:23.044479095Z" level=info msg="Connect containerd service" Jan 24 12:14:23.044533 containerd[1599]: time="2026-01-24T12:14:23.044504041Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 12:14:23.046173 containerd[1599]: time="2026-01-24T12:14:23.045871798Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 12:14:23.061478 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 12:14:23.074681 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 12:14:23.085704 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 12:14:23.093073 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 12:14:23.141354 tar[1593]: linux-amd64/README.md Jan 24 12:14:23.142233 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 38182 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:14:23.145712 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:14:23.161726 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 12:14:23.169945 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jan 24 12:14:23.178342 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 12:14:23.192882 containerd[1599]: time="2026-01-24T12:14:23.192849516Z" level=info msg="Start subscribing containerd event" Jan 24 12:14:23.194227 containerd[1599]: time="2026-01-24T12:14:23.193256366Z" level=info msg="Start recovering state" Jan 24 12:14:23.194227 containerd[1599]: time="2026-01-24T12:14:23.193442332Z" level=info msg="Start event monitor" Jan 24 12:14:23.194227 containerd[1599]: time="2026-01-24T12:14:23.193467049Z" level=info msg="Start cni network conf syncer for default" Jan 24 12:14:23.194227 containerd[1599]: time="2026-01-24T12:14:23.193479121Z" level=info msg="Start streaming server" Jan 24 12:14:23.194227 containerd[1599]: time="2026-01-24T12:14:23.193489491Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 24 12:14:23.194227 containerd[1599]: time="2026-01-24T12:14:23.193498658Z" level=info msg="runtime interface starting up..." Jan 24 12:14:23.194227 containerd[1599]: time="2026-01-24T12:14:23.193506893Z" level=info msg="starting plugins..." Jan 24 12:14:23.194227 containerd[1599]: time="2026-01-24T12:14:23.193523665Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 24 12:14:23.194227 containerd[1599]: time="2026-01-24T12:14:23.193214978Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 12:14:23.194227 containerd[1599]: time="2026-01-24T12:14:23.193693802Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 12:14:23.194227 containerd[1599]: time="2026-01-24T12:14:23.193757732Z" level=info msg="containerd successfully booted in 0.203766s" Jan 24 12:14:23.194252 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 12:14:23.206318 systemd-logind[1577]: New session 1 of user core. Jan 24 12:14:23.221914 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 12:14:23.234224 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 12:14:23.266435 (systemd)[1681]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:14:23.271816 systemd-logind[1577]: New session 2 of user core. Jan 24 12:14:23.317459 systemd-networkd[1516]: eth0: Gained IPv6LL Jan 24 12:14:23.322002 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 12:14:23.330787 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 12:14:23.338842 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 12:14:23.346850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 12:14:23.369976 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 12:14:23.415708 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 12:14:23.416528 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 24 12:14:23.427689 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 12:14:23.432341 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 12:14:23.453961 systemd[1681]: Queued start job for default target default.target. Jan 24 12:14:23.468493 systemd[1681]: Created slice app.slice - User Application Slice. Jan 24 12:14:23.468584 systemd[1681]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. 
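With containerd reporting "successfully booted" and serving on /run/containerd/containerd.sock, the runtime can be probed directly. A quick sanity check, assuming crictl and ctr are on the PATH (ctr ships with containerd; crictl may need to be installed separately):

    # Query the CRI endpoint containerd just exposed (socket path taken from the log).
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info
    # Or ask containerd itself for client/server versions.
    ctr --address /run/containerd/containerd.sock version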
Jan 24 12:14:23.468604 systemd[1681]: Reached target paths.target - Paths. Jan 24 12:14:23.468723 systemd[1681]: Reached target timers.target - Timers. Jan 24 12:14:23.471060 systemd[1681]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 12:14:23.472626 systemd[1681]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 24 12:14:23.489425 systemd[1681]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 12:14:23.489558 systemd[1681]: Reached target sockets.target - Sockets. Jan 24 12:14:23.492439 systemd[1681]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 24 12:14:23.492633 systemd[1681]: Reached target basic.target - Basic System. Jan 24 12:14:23.492805 systemd[1681]: Reached target default.target - Main User Target. Jan 24 12:14:23.492892 systemd[1681]: Startup finished in 212ms. Jan 24 12:14:23.492983 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 12:14:23.518636 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 12:14:23.561652 systemd[1]: Started sshd@1-10.0.0.151:22-10.0.0.1:50522.service - OpenSSH per-connection server daemon (10.0.0.1:50522). Jan 24 12:14:23.643256 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 50522 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:14:23.646353 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:14:23.656250 systemd-logind[1577]: New session 3 of user core. Jan 24 12:14:23.666541 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 12:14:23.695581 sshd[1717]: Connection closed by 10.0.0.1 port 50522 Jan 24 12:14:23.696054 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Jan 24 12:14:23.714703 systemd[1]: sshd@1-10.0.0.151:22-10.0.0.1:50522.service: Deactivated successfully. Jan 24 12:14:23.717573 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 12:14:23.718936 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. Jan 24 12:14:23.722818 systemd[1]: Started sshd@2-10.0.0.151:22-10.0.0.1:50536.service - OpenSSH per-connection server daemon (10.0.0.1:50536). Jan 24 12:14:23.733528 systemd-logind[1577]: Removed session 3. Jan 24 12:14:23.803249 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 50536 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:14:23.805639 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:14:23.814951 systemd-logind[1577]: New session 4 of user core. Jan 24 12:14:23.822459 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 12:14:23.850799 sshd[1727]: Connection closed by 10.0.0.1 port 50536 Jan 24 12:14:23.851379 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Jan 24 12:14:23.856685 systemd[1]: sshd@2-10.0.0.151:22-10.0.0.1:50536.service: Deactivated successfully. Jan 24 12:14:23.859664 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 12:14:23.862068 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. Jan 24 12:14:23.864777 systemd-logind[1577]: Removed session 4. Jan 24 12:14:24.395776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 12:14:24.402622 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 12:14:24.403351 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 12:14:24.411218 systemd[1]: Startup finished in 5.608s (kernel) + 11.639s (initrd) + 7.994s (userspace) = 25.243s. Jan 24 12:14:24.986234 kubelet[1737]: E0124 12:14:24.985939 1737 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 12:14:24.989518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 12:14:24.989812 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 12:14:24.990635 systemd[1]: kubelet.service: Consumed 1.096s CPU time, 265.1M memory peak. Jan 24 12:14:33.870562 systemd[1]: Started sshd@3-10.0.0.151:22-10.0.0.1:37440.service - OpenSSH per-connection server daemon (10.0.0.1:37440). Jan 24 12:14:33.969258 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 37440 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:14:33.972020 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:14:33.986797 systemd-logind[1577]: New session 5 of user core. Jan 24 12:14:34.001612 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 12:14:34.051649 sshd[1755]: Connection closed by 10.0.0.1 port 37440 Jan 24 12:14:34.054542 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Jan 24 12:14:34.066065 systemd[1]: sshd@3-10.0.0.151:22-10.0.0.1:37440.service: Deactivated successfully. Jan 24 12:14:34.068878 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 12:14:34.071597 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. Jan 24 12:14:34.074758 systemd[1]: Started sshd@4-10.0.0.151:22-10.0.0.1:37456.service - OpenSSH per-connection server daemon (10.0.0.1:37456). Jan 24 12:14:34.076389 systemd-logind[1577]: Removed session 5. Jan 24 12:14:34.165755 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 37456 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:14:34.168787 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:14:34.177972 systemd-logind[1577]: New session 6 of user core. Jan 24 12:14:34.191628 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 12:14:34.210013 sshd[1766]: Connection closed by 10.0.0.1 port 37456 Jan 24 12:14:34.210641 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Jan 24 12:14:34.219749 systemd[1]: sshd@4-10.0.0.151:22-10.0.0.1:37456.service: Deactivated successfully. Jan 24 12:14:34.222863 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 12:14:34.225580 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. Jan 24 12:14:34.229477 systemd[1]: Started sshd@5-10.0.0.151:22-10.0.0.1:37472.service - OpenSSH per-connection server daemon (10.0.0.1:37472). Jan 24 12:14:34.231824 systemd-logind[1577]: Removed session 6. 
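The kubelet failure above has a single cause: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by kubeadm during "kubeadm init" or "kubeadm join", so the error repeats on every restart until the node is actually bootstrapped. For illustration only, a minimal KubeletConfiguration of the shape kubeadm generates looks like this; the cgroupDriver matches the SystemdCgroup=true runc option in the containerd config earlier, everything else is an assumption:

    # Sketch of the file the kubelet is looking for; kubeadm normally creates it.
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    EOF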
Jan 24 12:14:34.318266 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 37472 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:14:34.320735 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:14:34.330700 systemd-logind[1577]: New session 7 of user core. Jan 24 12:14:34.348498 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 12:14:34.377864 sshd[1776]: Connection closed by 10.0.0.1 port 37472 Jan 24 12:14:34.378703 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Jan 24 12:14:34.392315 systemd[1]: sshd@5-10.0.0.151:22-10.0.0.1:37472.service: Deactivated successfully. Jan 24 12:14:34.394705 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 12:14:34.396190 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. Jan 24 12:14:34.399851 systemd[1]: Started sshd@6-10.0.0.151:22-10.0.0.1:37480.service - OpenSSH per-connection server daemon (10.0.0.1:37480). Jan 24 12:14:34.401585 systemd-logind[1577]: Removed session 7. Jan 24 12:14:34.483919 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 37480 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:14:34.486692 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:14:34.495396 systemd-logind[1577]: New session 8 of user core. Jan 24 12:14:34.505539 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 12:14:34.543521 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 12:14:34.544029 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 12:14:34.562008 sudo[1788]: pam_unix(sudo:session): session closed for user root Jan 24 12:14:34.564855 sshd[1787]: Connection closed by 10.0.0.1 port 37480 Jan 24 12:14:34.565411 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Jan 24 12:14:34.578633 systemd[1]: sshd@6-10.0.0.151:22-10.0.0.1:37480.service: Deactivated successfully. Jan 24 12:14:34.581914 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 12:14:34.583841 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Jan 24 12:14:34.588021 systemd[1]: Started sshd@7-10.0.0.151:22-10.0.0.1:37488.service - OpenSSH per-connection server daemon (10.0.0.1:37488). Jan 24 12:14:34.589431 systemd-logind[1577]: Removed session 8. Jan 24 12:14:34.675478 sshd[1795]: Accepted publickey for core from 10.0.0.1 port 37488 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:14:34.678235 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:14:34.686541 systemd-logind[1577]: New session 9 of user core. Jan 24 12:14:34.693557 systemd[1]: Started session-9.scope - Session 9 of User core. 
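The "setenforce 1" invocation in session 8 above switches SELinux to enforcing mode for the running system only. If the SELinux userland tools are present (setenforce clearly is; sestatus comes with policycoreutils and may not be), the current and boot-time modes can be confirmed with:

    getenforce    # prints Enforcing / Permissive / Disabled
    sestatus      # fuller report, including the configured boot-time mode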
Jan 24 12:14:34.720391 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 12:14:34.720904 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 12:14:34.726986 sudo[1801]: pam_unix(sudo:session): session closed for user root Jan 24 12:14:34.739179 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 24 12:14:34.739630 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 12:14:34.751383 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 24 12:14:34.837000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 12:14:34.838917 augenrules[1825]: No rules Jan 24 12:14:34.840182 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 12:14:34.840660 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 12:14:34.844206 kernel: kauditd_printk_skb: 126 callbacks suppressed Jan 24 12:14:34.844271 kernel: audit: type=1305 audit(1769256874.837:218): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 12:14:34.848691 sudo[1800]: pam_unix(sudo:session): session closed for user root Jan 24 12:14:34.852561 sshd[1799]: Connection closed by 10.0.0.1 port 37488 Jan 24 12:14:34.837000 audit[1825]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc936d5630 a2=420 a3=0 items=0 ppid=1806 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:34.852854 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Jan 24 12:14:34.873790 kernel: audit: type=1300 audit(1769256874.837:218): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc936d5630 a2=420 a3=0 items=0 ppid=1806 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:34.837000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 12:14:34.883485 kernel: audit: type=1327 audit(1769256874.837:218): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 12:14:34.883540 kernel: audit: type=1130 audit(1769256874.839:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:34.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:34.897449 kernel: audit: type=1131 audit(1769256874.839:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:34.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:14:34.847000 audit[1800]: USER_END pid=1800 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 12:14:34.924805 kernel: audit: type=1106 audit(1769256874.847:221): pid=1800 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 12:14:34.924919 kernel: audit: type=1104 audit(1769256874.848:222): pid=1800 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 12:14:34.848000 audit[1800]: CRED_DISP pid=1800 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 12:14:34.854000 audit[1795]: USER_END pid=1795 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:14:34.962252 kernel: audit: type=1106 audit(1769256874.854:223): pid=1795 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:14:34.962409 kernel: audit: type=1104 audit(1769256874.855:224): pid=1795 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:14:34.855000 audit[1795]: CRED_DISP pid=1795 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:14:34.984315 systemd[1]: sshd@7-10.0.0.151:22-10.0.0.1:37488.service: Deactivated successfully. Jan 24 12:14:34.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.151:22-10.0.0.1:37488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:34.986896 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 12:14:34.988490 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. Jan 24 12:14:34.991737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 12:14:34.992732 systemd-logind[1577]: Removed session 9. Jan 24 12:14:34.994756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 12:14:34.998598 systemd[1]: Started sshd@8-10.0.0.151:22-10.0.0.1:37498.service - OpenSSH per-connection server daemon (10.0.0.1:37498). 
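The audit-rules activity above is the install steps from session 9: the stock rule files (80-selinux.rules, 99-default.rules) are removed, audit-rules is restarted, and augenrules then correctly reports "No rules". If rules were wanted again they would go back into /etc/audit/rules.d/ in plain auditctl syntax; the snippet below is an example only, with paths and key names that are assumptions rather than anything taken from this host:

    # Example only: recreate a rule file and reload. augenrules merges every
    # *.rules file under /etc/audit/rules.d/ into /etc/audit/audit.rules.
    cat <<'EOF' >/etc/audit/rules.d/50-kube.rules
    -w /etc/kubernetes/ -p wa -k kube-config
    -w /var/lib/kubelet/ -p wa -k kubelet
    EOF
    augenrules --load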
Jan 24 12:14:34.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.151:22-10.0.0.1:37498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:35.002216 kernel: audit: type=1131 audit(1769256874.983:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.151:22-10.0.0.1:37488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:35.080000 audit[1835]: USER_ACCT pid=1835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:14:35.082036 sshd[1835]: Accepted publickey for core from 10.0.0.1 port 37498 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:14:35.084889 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:14:35.082000 audit[1835]: CRED_ACQ pid=1835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:14:35.082000 audit[1835]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb3854700 a2=3 a3=0 items=0 ppid=1 pid=1835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:35.082000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:14:35.093603 systemd-logind[1577]: New session 10 of user core. Jan 24 12:14:35.107615 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 12:14:35.111000 audit[1835]: USER_START pid=1835 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:14:35.115000 audit[1841]: CRED_ACQ pid=1841 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:14:35.135735 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 12:14:35.136478 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 12:14:35.134000 audit[1842]: USER_ACCT pid=1842 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 12:14:35.134000 audit[1842]: CRED_REFR pid=1842 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 12:14:35.135000 audit[1842]: USER_START pid=1842 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? 
addr=? terminal=? res=success' Jan 24 12:14:35.252546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 12:14:35.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:35.269675 (kubelet)[1858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 12:14:35.352213 kubelet[1858]: E0124 12:14:35.351698 1858 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 12:14:35.357565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 12:14:35.357858 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 12:14:35.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 12:14:35.358804 systemd[1]: kubelet.service: Consumed 304ms CPU time, 108.9M memory peak. Jan 24 12:14:35.669579 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 12:14:35.689727 (dockerd)[1876]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 12:14:36.044837 dockerd[1876]: time="2026-01-24T12:14:36.044602046Z" level=info msg="Starting up" Jan 24 12:14:36.046716 dockerd[1876]: time="2026-01-24T12:14:36.046644313Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 24 12:14:36.071175 dockerd[1876]: time="2026-01-24T12:14:36.070932513Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 24 12:14:36.231849 dockerd[1876]: time="2026-01-24T12:14:36.231656467Z" level=info msg="Loading containers: start." 
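dockerd above starts with all of its optional environment variables (DOCKER_OPTS, DOCKER_OPT_BIP, and so on) unset and selects overlay2 automatically. Where explicit settings are wanted, the conventional place is /etc/docker/daemon.json; the fragment below is a sketch of commonly used keys, none of which are taken from this host's configuration:

    # Illustrative daemon.json; values are assumptions, not this host's config.
    mkdir -p /etc/docker
    cat <<'EOF' >/etc/docker/daemon.json
    {
      "storage-driver": "overlay2",
      "log-driver": "json-file",
      "log-opts": { "max-size": "10m", "max-file": "3" }
    }
    EOF
    systemctl restart docker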
Jan 24 12:14:36.253323 kernel: Initializing XFRM netlink socket Jan 24 12:14:36.416000 audit[1929]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.416000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdddc7ae10 a2=0 a3=0 items=0 ppid=1876 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.416000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 12:14:36.423000 audit[1931]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.423000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdb2c789b0 a2=0 a3=0 items=0 ppid=1876 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.423000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 12:14:36.433000 audit[1933]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.433000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebfbabf30 a2=0 a3=0 items=0 ppid=1876 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.433000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 12:14:36.439000 audit[1935]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.439000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd0d17270 a2=0 a3=0 items=0 ppid=1876 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.439000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 12:14:36.447000 audit[1937]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.447000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea54baed0 a2=0 a3=0 items=0 ppid=1876 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.447000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 12:14:36.453000 audit[1939]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.453000 audit[1939]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7ffea7640d90 a2=0 a3=0 items=0 ppid=1876 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.453000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 12:14:36.460000 audit[1941]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.460000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe34160f40 a2=0 a3=0 items=0 ppid=1876 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.460000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 12:14:36.468000 audit[1943]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.468000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffff09d5210 a2=0 a3=0 items=0 ppid=1876 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.468000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 12:14:36.525000 audit[1946]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.525000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc8e850b00 a2=0 a3=0 items=0 ppid=1876 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.525000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 24 12:14:36.532000 audit[1948]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.532000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffec8b73350 a2=0 a3=0 items=0 ppid=1876 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.532000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 12:14:36.541000 audit[1950]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.541000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd2d570590 a2=0 
a3=0 items=0 ppid=1876 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.541000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 12:14:36.549000 audit[1952]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.549000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffdf2950a90 a2=0 a3=0 items=0 ppid=1876 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.549000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 12:14:36.556000 audit[1954]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.556000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffca7331b10 a2=0 a3=0 items=0 ppid=1876 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.556000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 12:14:36.676000 audit[1984]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.676000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff58853ca0 a2=0 a3=0 items=0 ppid=1876 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.676000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 12:14:36.683000 audit[1986]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.683000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffcf2019e0 a2=0 a3=0 items=0 ppid=1876 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.683000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 12:14:36.691000 audit[1988]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.691000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe04653e20 a2=0 a3=0 items=0 ppid=1876 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 12:14:36.691000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 12:14:36.698000 audit[1990]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.698000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa08af6d0 a2=0 a3=0 items=0 ppid=1876 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.698000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 12:14:36.703000 audit[1992]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.703000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdebb1ab10 a2=0 a3=0 items=0 ppid=1876 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.703000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 12:14:36.709000 audit[1994]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.709000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe65f63900 a2=0 a3=0 items=0 ppid=1876 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 12:14:36.715000 audit[1996]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.715000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd332a93e0 a2=0 a3=0 items=0 ppid=1876 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.715000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 12:14:36.721000 audit[1998]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.721000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe920c3750 a2=0 a3=0 items=0 ppid=1876 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.721000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 12:14:36.730000 audit[2000]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.730000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc5b6e9f60 a2=0 a3=0 items=0 ppid=1876 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.730000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 24 12:14:36.737000 audit[2002]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.737000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffeb5e55e20 a2=0 a3=0 items=0 ppid=1876 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.737000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 12:14:36.744000 audit[2004]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.744000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd20b609e0 a2=0 a3=0 items=0 ppid=1876 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.744000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 12:14:36.751000 audit[2006]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.751000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc291c0f50 a2=0 a3=0 items=0 ppid=1876 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.751000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 12:14:36.758000 audit[2008]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.758000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff89856450 a2=0 a3=0 items=0 ppid=1876 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.758000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 12:14:36.775000 audit[2013]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.775000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff25e26880 a2=0 a3=0 items=0 ppid=1876 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.775000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 12:14:36.783000 audit[2015]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.783000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe362fd980 a2=0 a3=0 items=0 ppid=1876 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.783000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 12:14:36.788000 audit[2017]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.788000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffec5a20e60 a2=0 a3=0 items=0 ppid=1876 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.788000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 12:14:36.795000 audit[2019]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.795000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd7fc81700 a2=0 a3=0 items=0 ppid=1876 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.795000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 12:14:36.801000 audit[2021]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.801000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff59346280 a2=0 a3=0 items=0 ppid=1876 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.801000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 12:14:36.808000 audit[2023]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2023 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:36.808000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffda3c0a150 a2=0 a3=0 items=0 ppid=1876 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.808000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 12:14:36.852000 audit[2028]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.852000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff559bed20 a2=0 a3=0 items=0 ppid=1876 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.852000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 24 12:14:36.861000 audit[2030]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.861000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffada39900 a2=0 a3=0 items=0 ppid=1876 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.861000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 24 12:14:36.894000 audit[2038]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.894000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffcf3aa8640 a2=0 a3=0 items=0 ppid=1876 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.894000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 24 12:14:36.926000 audit[2044]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.926000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe009790a0 a2=0 a3=0 items=0 ppid=1876 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.926000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 24 12:14:36.937000 audit[2046]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 
12:14:36.937000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe9f7be4a0 a2=0 a3=0 items=0 ppid=1876 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.937000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 24 12:14:36.943000 audit[2048]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.943000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffb969dbd0 a2=0 a3=0 items=0 ppid=1876 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.943000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 24 12:14:36.951000 audit[2050]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.951000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd2c609840 a2=0 a3=0 items=0 ppid=1876 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.951000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 12:14:36.959000 audit[2052]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:36.959000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdfcc8cbe0 a2=0 a3=0 items=0 ppid=1876 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:36.959000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 24 12:14:36.962009 systemd-networkd[1516]: docker0: Link UP Jan 24 12:14:36.978991 dockerd[1876]: time="2026-01-24T12:14:36.978835768Z" level=info msg="Loading containers: done." Jan 24 12:14:37.005981 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3281657418-merged.mount: Deactivated successfully. 
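The long run of NETFILTER_CFG and SYSCALL audit records above (their proctitle fields decode to iptables and ip6tables invocations) is Docker creating its standard chains before bringing docker0 up: DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER for both IPv4 and IPv6, plus the MASQUERADE rule for 172.17.0.0/16 on docker0. The resulting rule set can be inspected directly:

    # List the NAT-side DOCKER chain and the user-override chain Docker created.
    iptables -t nat -L DOCKER -n -v
    iptables -L DOCKER-USER -n -v --line-numbers
    # The same chains exist on the IPv6 side.
    ip6tables -L DOCKER-FORWARD -n -v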
Jan 24 12:14:37.023457 dockerd[1876]: time="2026-01-24T12:14:37.023277835Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 12:14:37.023819 dockerd[1876]: time="2026-01-24T12:14:37.023665489Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 24 12:14:37.023898 dockerd[1876]: time="2026-01-24T12:14:37.023839975Z" level=info msg="Initializing buildkit" Jan 24 12:14:37.093192 dockerd[1876]: time="2026-01-24T12:14:37.092972189Z" level=info msg="Completed buildkit initialization" Jan 24 12:14:37.106710 dockerd[1876]: time="2026-01-24T12:14:37.106480508Z" level=info msg="Daemon has completed initialization" Jan 24 12:14:37.106859 dockerd[1876]: time="2026-01-24T12:14:37.106735466Z" level=info msg="API listen on /run/docker.sock" Jan 24 12:14:37.107043 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 12:14:37.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:38.108541 containerd[1599]: time="2026-01-24T12:14:38.108401384Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 12:14:38.809667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount26914407.mount: Deactivated successfully. Jan 24 12:14:40.514464 containerd[1599]: time="2026-01-24T12:14:40.514294447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:40.515895 containerd[1599]: time="2026-01-24T12:14:40.515854270Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 24 12:14:40.517845 containerd[1599]: time="2026-01-24T12:14:40.517692051Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:40.521600 containerd[1599]: time="2026-01-24T12:14:40.521444837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:40.523302 containerd[1599]: time="2026-01-24T12:14:40.523206226Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.414636517s" Jan 24 12:14:40.523302 containerd[1599]: time="2026-01-24T12:14:40.523277900Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 12:14:40.524612 containerd[1599]: time="2026-01-24T12:14:40.524432571Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 12:14:42.455237 containerd[1599]: time="2026-01-24T12:14:42.454995529Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:42.457320 containerd[1599]: time="2026-01-24T12:14:42.457280298Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 24 12:14:42.459783 containerd[1599]: time="2026-01-24T12:14:42.459260995Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:42.463042 containerd[1599]: time="2026-01-24T12:14:42.462978421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:42.464568 containerd[1599]: time="2026-01-24T12:14:42.464487839Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.939865825s" Jan 24 12:14:42.464568 containerd[1599]: time="2026-01-24T12:14:42.464522855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 12:14:42.465690 containerd[1599]: time="2026-01-24T12:14:42.465604869Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 12:14:44.143491 containerd[1599]: time="2026-01-24T12:14:44.143318374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:44.145527 containerd[1599]: time="2026-01-24T12:14:44.145433163Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 24 12:14:44.147443 containerd[1599]: time="2026-01-24T12:14:44.147299961Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:44.152846 containerd[1599]: time="2026-01-24T12:14:44.152721933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:44.153928 containerd[1599]: time="2026-01-24T12:14:44.153807698Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.68815545s" Jan 24 12:14:44.153928 containerd[1599]: time="2026-01-24T12:14:44.153893297Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 12:14:44.155319 containerd[1599]: time="2026-01-24T12:14:44.154910199Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 12:14:45.337781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2687211571.mount: Deactivated successfully. Jan 24 12:14:45.551916 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 12:14:45.554601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 12:14:45.777641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 12:14:45.788851 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 24 12:14:45.788978 kernel: audit: type=1130 audit(1769256885.776:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:45.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:45.807681 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 12:14:45.877845 kubelet[2179]: E0124 12:14:45.877794 2179 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 12:14:45.884605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 12:14:45.884996 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 12:14:45.885958 systemd[1]: kubelet.service: Consumed 270ms CPU time, 108.7M memory peak. Jan 24 12:14:45.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 12:14:45.903233 kernel: audit: type=1131 audit(1769256885.884:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 24 12:14:46.139904 containerd[1599]: time="2026-01-24T12:14:46.139449344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:46.141645 containerd[1599]: time="2026-01-24T12:14:46.141602567Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 24 12:14:46.143580 containerd[1599]: time="2026-01-24T12:14:46.143463717Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:46.146283 containerd[1599]: time="2026-01-24T12:14:46.146202426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:46.146860 containerd[1599]: time="2026-01-24T12:14:46.146617271Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.991673931s" Jan 24 12:14:46.146860 containerd[1599]: time="2026-01-24T12:14:46.146662916Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 12:14:46.147543 containerd[1599]: time="2026-01-24T12:14:46.147519887Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 12:14:46.676281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212902459.mount: Deactivated successfully. 
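
Each "Pulled image ... in <duration>" record pairs the bytes containerd read for that pull with the wall-clock time it took, so a rough transfer rate falls straight out of the log. A small sketch using the kube-proxy figures above; note that "bytes read" appears to count the compressed data fetched for this pull, while the quoted size is the unpacked image, so this is only a coarse estimate of network throughput:

    # Rough pull throughput for kube-proxy:v1.32.11, from the fields logged above.
    bytes_read = 19_572_392      # "active requests=0, bytes read=19572392"
    duration_s = 1.991673931     # "... in 1.991673931s"
    print(f"~{bytes_read / duration_s / 1e6:.1f} MB/s")   # roughly 9.8 MB/s
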
Jan 24 12:14:48.495956 containerd[1599]: time="2026-01-24T12:14:48.495899617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:48.500182 containerd[1599]: time="2026-01-24T12:14:48.500052300Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=757173" Jan 24 12:14:48.503715 containerd[1599]: time="2026-01-24T12:14:48.503644451Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:48.511328 containerd[1599]: time="2026-01-24T12:14:48.511246308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:48.515142 containerd[1599]: time="2026-01-24T12:14:48.515013993Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.36746358s" Jan 24 12:14:48.515226 containerd[1599]: time="2026-01-24T12:14:48.515206242Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 12:14:48.516248 containerd[1599]: time="2026-01-24T12:14:48.515916888Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 12:14:48.979820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3124138064.mount: Deactivated successfully. 
Jan 24 12:14:48.989767 containerd[1599]: time="2026-01-24T12:14:48.989533322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 12:14:48.991850 containerd[1599]: time="2026-01-24T12:14:48.991593918Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 24 12:14:48.993891 containerd[1599]: time="2026-01-24T12:14:48.993625501Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 12:14:48.997913 containerd[1599]: time="2026-01-24T12:14:48.997631689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 12:14:48.998796 containerd[1599]: time="2026-01-24T12:14:48.998717693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 482.763254ms" Jan 24 12:14:48.998796 containerd[1599]: time="2026-01-24T12:14:48.998749352Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 12:14:48.999989 containerd[1599]: time="2026-01-24T12:14:48.999847452Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 12:14:49.614875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342769903.mount: Deactivated successfully. 
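
The pause:3.10 records above differ from the earlier pulls in one detail: the image also carries an io.cri-containerd.pinned=pinned label, the marker that keeps the sandbox image out of image garbage collection. Since the labels appear verbatim in the logged ImageCreate events, they can be scraped from the journal text; a minimal sketch, with the event string trimmed to the relevant part:

    import re

    # One ImageCreate event from the records above, trimmed to its label section.
    event = ('ImageCreate event name:"registry.k8s.io/pause:3.10" '
             'labels:{key:"io.cri-containerd.image" value:"managed"} '
             'labels:{key:"io.cri-containerd.pinned" value:"pinned"}')

    labels = dict(re.findall(r'labels:\{key:"([^"]+)" value:"([^"]+)"\}', event))
    print(labels.get("io.cri-containerd.pinned"))   # pinned
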
Jan 24 12:14:52.160965 containerd[1599]: time="2026-01-24T12:14:52.160729053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:52.162209 containerd[1599]: time="2026-01-24T12:14:52.162168960Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Jan 24 12:14:52.164973 containerd[1599]: time="2026-01-24T12:14:52.164802026Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:52.169738 containerd[1599]: time="2026-01-24T12:14:52.168682290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:14:52.170142 containerd[1599]: time="2026-01-24T12:14:52.169939930Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.170063014s" Jan 24 12:14:52.170142 containerd[1599]: time="2026-01-24T12:14:52.170011223Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 24 12:14:55.073589 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 12:14:55.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:55.073867 systemd[1]: kubelet.service: Consumed 270ms CPU time, 108.7M memory peak. Jan 24 12:14:55.077595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 12:14:55.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:55.102620 kernel: audit: type=1130 audit(1769256895.072:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:55.102732 kernel: audit: type=1131 audit(1769256895.072:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:55.131507 systemd[1]: Reload requested from client PID 2332 ('systemctl') (unit session-10.scope)... Jan 24 12:14:55.131525 systemd[1]: Reloading... Jan 24 12:14:55.251238 zram_generator::config[2378]: No configuration found. Jan 24 12:14:55.547704 systemd[1]: Reloading finished in 415 ms. 
Jan 24 12:14:55.590000 audit: BPF prog-id=63 op=LOAD Jan 24 12:14:55.598286 kernel: audit: type=1334 audit(1769256895.590:282): prog-id=63 op=LOAD Jan 24 12:14:55.591000 audit: BPF prog-id=50 op=UNLOAD Jan 24 12:14:55.591000 audit: BPF prog-id=64 op=LOAD Jan 24 12:14:55.609877 kernel: audit: type=1334 audit(1769256895.591:283): prog-id=50 op=UNLOAD Jan 24 12:14:55.609928 kernel: audit: type=1334 audit(1769256895.591:284): prog-id=64 op=LOAD Jan 24 12:14:55.609963 kernel: audit: type=1334 audit(1769256895.591:285): prog-id=65 op=LOAD Jan 24 12:14:55.591000 audit: BPF prog-id=65 op=LOAD Jan 24 12:14:55.591000 audit: BPF prog-id=51 op=UNLOAD Jan 24 12:14:55.621491 kernel: audit: type=1334 audit(1769256895.591:286): prog-id=51 op=UNLOAD Jan 24 12:14:55.591000 audit: BPF prog-id=52 op=UNLOAD Jan 24 12:14:55.627375 kernel: audit: type=1334 audit(1769256895.591:287): prog-id=52 op=UNLOAD Jan 24 12:14:55.627504 kernel: audit: type=1334 audit(1769256895.592:288): prog-id=66 op=LOAD Jan 24 12:14:55.592000 audit: BPF prog-id=66 op=LOAD Jan 24 12:14:55.592000 audit: BPF prog-id=58 op=UNLOAD Jan 24 12:14:55.638943 kernel: audit: type=1334 audit(1769256895.592:289): prog-id=58 op=UNLOAD Jan 24 12:14:55.594000 audit: BPF prog-id=67 op=LOAD Jan 24 12:14:55.594000 audit: BPF prog-id=46 op=UNLOAD Jan 24 12:14:55.594000 audit: BPF prog-id=68 op=LOAD Jan 24 12:14:55.594000 audit: BPF prog-id=69 op=LOAD Jan 24 12:14:55.594000 audit: BPF prog-id=47 op=UNLOAD Jan 24 12:14:55.594000 audit: BPF prog-id=48 op=UNLOAD Jan 24 12:14:55.595000 audit: BPF prog-id=70 op=LOAD Jan 24 12:14:55.595000 audit: BPF prog-id=59 op=UNLOAD Jan 24 12:14:55.597000 audit: BPF prog-id=71 op=LOAD Jan 24 12:14:55.597000 audit: BPF prog-id=60 op=UNLOAD Jan 24 12:14:55.597000 audit: BPF prog-id=72 op=LOAD Jan 24 12:14:55.648000 audit: BPF prog-id=73 op=LOAD Jan 24 12:14:55.648000 audit: BPF prog-id=61 op=UNLOAD Jan 24 12:14:55.648000 audit: BPF prog-id=62 op=UNLOAD Jan 24 12:14:55.649000 audit: BPF prog-id=74 op=LOAD Jan 24 12:14:55.649000 audit: BPF prog-id=49 op=UNLOAD Jan 24 12:14:55.654000 audit: BPF prog-id=75 op=LOAD Jan 24 12:14:55.654000 audit: BPF prog-id=43 op=UNLOAD Jan 24 12:14:55.654000 audit: BPF prog-id=76 op=LOAD Jan 24 12:14:55.654000 audit: BPF prog-id=77 op=LOAD Jan 24 12:14:55.654000 audit: BPF prog-id=44 op=UNLOAD Jan 24 12:14:55.654000 audit: BPF prog-id=45 op=UNLOAD Jan 24 12:14:55.654000 audit: BPF prog-id=78 op=LOAD Jan 24 12:14:55.654000 audit: BPF prog-id=79 op=LOAD Jan 24 12:14:55.654000 audit: BPF prog-id=53 op=UNLOAD Jan 24 12:14:55.654000 audit: BPF prog-id=54 op=UNLOAD Jan 24 12:14:55.656000 audit: BPF prog-id=80 op=LOAD Jan 24 12:14:55.656000 audit: BPF prog-id=55 op=UNLOAD Jan 24 12:14:55.656000 audit: BPF prog-id=81 op=LOAD Jan 24 12:14:55.656000 audit: BPF prog-id=82 op=LOAD Jan 24 12:14:55.656000 audit: BPF prog-id=56 op=UNLOAD Jan 24 12:14:55.656000 audit: BPF prog-id=57 op=UNLOAD Jan 24 12:14:55.690011 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 12:14:55.690269 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 12:14:55.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 12:14:55.690780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 12:14:55.690825 systemd[1]: kubelet.service: Consumed 181ms CPU time, 98.5M memory peak. 
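
The burst of BPF prog-id LOAD/UNLOAD audit records above is consistent with the daemon-reload that was just logged: on reload systemd re-attaches the per-unit BPF programs it manages, so each affected unit shows up as a fresh program loaded and the old one unloaded. A quick way to sanity-check that loads and unloads roughly pair up is to tally the two operations from the journal text; a small sketch over a few of the records above:

    from collections import Counter
    import re

    # A handful of the audit records above, as they appear in the journal.
    records = """
    audit: BPF prog-id=63 op=LOAD
    audit: BPF prog-id=50 op=UNLOAD
    audit: BPF prog-id=64 op=LOAD
    audit: BPF prog-id=65 op=LOAD
    audit: BPF prog-id=51 op=UNLOAD
    audit: BPF prog-id=52 op=UNLOAD
    """

    print(Counter(re.findall(r"op=(LOAD|UNLOAD)", records)))
    # Counter({'LOAD': 3, 'UNLOAD': 3})
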
Jan 24 12:14:55.693523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 12:14:55.945568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 12:14:55.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:14:55.963749 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 12:14:56.060980 kubelet[2426]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 12:14:56.060980 kubelet[2426]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 12:14:56.060980 kubelet[2426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 12:14:56.061663 kubelet[2426]: I0124 12:14:56.061039 2426 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 12:14:56.402751 kubelet[2426]: I0124 12:14:56.402631 2426 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 12:14:56.402751 kubelet[2426]: I0124 12:14:56.402712 2426 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 12:14:56.405070 kubelet[2426]: I0124 12:14:56.404901 2426 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 12:14:56.438815 kubelet[2426]: E0124 12:14:56.438697 2426 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 12:14:56.440828 kubelet[2426]: I0124 12:14:56.440643 2426 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 12:14:56.450355 kubelet[2426]: I0124 12:14:56.450234 2426 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 12:14:56.459792 kubelet[2426]: I0124 12:14:56.459691 2426 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 12:14:56.463774 kubelet[2426]: I0124 12:14:56.463595 2426 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 12:14:56.463949 kubelet[2426]: I0124 12:14:56.463689 2426 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 12:14:56.463949 kubelet[2426]: I0124 12:14:56.463894 2426 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 12:14:56.463949 kubelet[2426]: I0124 12:14:56.463906 2426 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 12:14:56.464479 kubelet[2426]: I0124 12:14:56.464045 2426 state_mem.go:36] "Initialized new in-memory state store" Jan 24 12:14:56.468944 kubelet[2426]: I0124 12:14:56.468793 2426 kubelet.go:446] "Attempting to sync node with API server" Jan 24 12:14:56.468944 kubelet[2426]: I0124 12:14:56.468860 2426 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 12:14:56.468944 kubelet[2426]: I0124 12:14:56.468890 2426 kubelet.go:352] "Adding apiserver pod source" Jan 24 12:14:56.468944 kubelet[2426]: I0124 12:14:56.468906 2426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 12:14:56.477040 kubelet[2426]: W0124 12:14:56.476767 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jan 24 12:14:56.477040 kubelet[2426]: E0124 12:14:56.476868 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 12:14:56.477720 kubelet[2426]: W0124 12:14:56.477608 2426 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jan 24 12:14:56.477765 kubelet[2426]: E0124 12:14:56.477727 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 12:14:56.477920 kubelet[2426]: I0124 12:14:56.477838 2426 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 12:14:56.479024 kubelet[2426]: I0124 12:14:56.478728 2426 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 12:14:56.479769 kubelet[2426]: W0124 12:14:56.479599 2426 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 12:14:56.483796 kubelet[2426]: I0124 12:14:56.483498 2426 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 12:14:56.483796 kubelet[2426]: I0124 12:14:56.483547 2426 server.go:1287] "Started kubelet" Jan 24 12:14:56.485048 kubelet[2426]: I0124 12:14:56.484865 2426 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 12:14:56.486196 kubelet[2426]: I0124 12:14:56.485857 2426 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 12:14:56.487221 kubelet[2426]: I0124 12:14:56.486813 2426 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 12:14:56.487221 kubelet[2426]: I0124 12:14:56.486816 2426 server.go:479] "Adding debug handlers to kubelet server" Jan 24 12:14:56.493653 kubelet[2426]: I0124 12:14:56.493382 2426 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 12:14:56.494044 kubelet[2426]: E0124 12:14:56.493764 2426 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 12:14:56.494929 kubelet[2426]: I0124 12:14:56.493075 2426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 12:14:56.496726 kubelet[2426]: E0124 12:14:56.494262 2426 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188da9c5ff4f7ca4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 12:14:56.483523748 +0000 UTC m=+0.511136781,LastTimestamp:2026-01-24 12:14:56.483523748 +0000 UTC m=+0.511136781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 12:14:56.497614 kubelet[2426]: E0124 12:14:56.496858 2426 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 12:14:56.497614 kubelet[2426]: I0124 12:14:56.496891 2426 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 12:14:56.497831 kubelet[2426]: I0124 12:14:56.496897 2426 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 12:14:56.497831 kubelet[2426]: E0124 12:14:56.497322 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="200ms" Jan 24 12:14:56.497831 kubelet[2426]: I0124 12:14:56.497735 2426 reconciler.go:26] "Reconciler: start to sync state" Jan 24 12:14:56.498047 kubelet[2426]: W0124 12:14:56.497944 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jan 24 12:14:56.498203 kubelet[2426]: E0124 12:14:56.498056 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 12:14:56.500516 kubelet[2426]: I0124 12:14:56.500339 2426 factory.go:221] Registration of the containerd container factory successfully Jan 24 12:14:56.500516 kubelet[2426]: I0124 12:14:56.500466 2426 factory.go:221] Registration of the systemd container factory successfully Jan 24 12:14:56.500600 kubelet[2426]: I0124 12:14:56.500561 2426 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 12:14:56.506000 audit[2441]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:56.506000 audit[2441]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe07832300 a2=0 a3=0 items=0 ppid=2426 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:56.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 12:14:56.510000 audit[2442]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:56.510000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7c4372d0 a2=0 a3=0 items=0 ppid=2426 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:56.510000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 12:14:56.518000 audit[2445]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:56.518000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffff93f8ad0 a2=0 a3=0 items=0 ppid=2426 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:56.518000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 12:14:56.525000 audit[2449]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:56.525000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff9d8b32d0 a2=0 a3=0 items=0 ppid=2426 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:56.525000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 12:14:56.528870 kubelet[2426]: I0124 12:14:56.528819 2426 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 12:14:56.528870 kubelet[2426]: I0124 12:14:56.528839 2426 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 12:14:56.528870 kubelet[2426]: I0124 12:14:56.528857 2426 state_mem.go:36] "Initialized new in-memory state store" Jan 24 12:14:56.597877 kubelet[2426]: E0124 12:14:56.597690 2426 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 12:14:56.617579 kubelet[2426]: I0124 12:14:56.617485 2426 policy_none.go:49] "None policy: Start" Jan 24 12:14:56.617579 kubelet[2426]: I0124 12:14:56.617560 2426 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 12:14:56.617579 kubelet[2426]: I0124 12:14:56.617582 2426 state_mem.go:35] "Initializing new in-memory state store" Jan 24 12:14:56.625000 audit[2452]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:56.625000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcb8777e90 a2=0 a3=0 items=0 ppid=2426 pid=2452 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:56.625000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 24 12:14:56.628931 kubelet[2426]: I0124 12:14:56.628666 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 12:14:56.631000 audit[2455]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:56.631000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffbbd9c3c0 a2=0 a3=0 items=0 ppid=2426 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:56.631000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 12:14:56.632000 audit[2454]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:56.632000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe88244660 a2=0 a3=0 items=0 ppid=2426 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:56.632000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 12:14:56.633552 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 12:14:56.634013 kubelet[2426]: I0124 12:14:56.633807 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 12:14:56.634013 kubelet[2426]: I0124 12:14:56.633827 2426 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 12:14:56.634013 kubelet[2426]: I0124 12:14:56.633848 2426 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
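
The "Creating Container Manager object based on Node Config" record a little earlier dumps the kubelet's nodeConfig as one JSON object, so details such as the hard eviction thresholds can be pulled out mechanically instead of read by eye. A small sketch over the HardEvictionThresholds array copied from that record (trimmed to Signal/Operator/Value):

    import json

    # HardEvictionThresholds from the nodeConfig logged above, trimmed for brevity.
    thresholds = json.loads("""
    [{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
     {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
     {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
     {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
     {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}]
    """)

    for t in thresholds:
        v = t["Value"]
        limit = v["Quantity"] if v["Quantity"] else f'{v["Percentage"]:.0%}'
        print(t["Signal"], t["Operator"], limit)
    # memory.available LessThan 100Mi
    # nodefs.available LessThan 10%
    # ... and so on for the inode and imagefs signals.
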
Jan 24 12:14:56.634013 kubelet[2426]: I0124 12:14:56.633856 2426 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 12:14:56.634013 kubelet[2426]: E0124 12:14:56.633916 2426 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 12:14:56.637985 kubelet[2426]: W0124 12:14:56.637670 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jan 24 12:14:56.637985 kubelet[2426]: E0124 12:14:56.637770 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 12:14:56.636000 audit[2456]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:56.636000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc42fe9180 a2=0 a3=0 items=0 ppid=2426 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:56.636000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 12:14:56.638000 audit[2457]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:56.638000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff39f9fd20 a2=0 a3=0 items=0 ppid=2426 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:56.638000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 12:14:56.640000 audit[2460]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:14:56.640000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd17e78350 a2=0 a3=0 items=0 ppid=2426 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:56.640000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 12:14:56.642000 audit[2461]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:56.642000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe157354a0 a2=0 a3=0 items=0 ppid=2426 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 12:14:56.642000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 12:14:56.646000 audit[2462]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:14:56.646000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd410e1de0 a2=0 a3=0 items=0 ppid=2426 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:56.646000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 12:14:56.648577 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 12:14:56.656005 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 12:14:56.673580 kubelet[2426]: I0124 12:14:56.673242 2426 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 12:14:56.673688 kubelet[2426]: I0124 12:14:56.673639 2426 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 12:14:56.673688 kubelet[2426]: I0124 12:14:56.673652 2426 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 12:14:56.674071 kubelet[2426]: I0124 12:14:56.673948 2426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 12:14:56.675987 kubelet[2426]: E0124 12:14:56.675932 2426 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 12:14:56.675987 kubelet[2426]: E0124 12:14:56.675978 2426 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 12:14:56.698456 kubelet[2426]: E0124 12:14:56.698305 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="400ms" Jan 24 12:14:56.749614 systemd[1]: Created slice kubepods-burstable-podcb11f14fc6bf4c2c58f8e940d97c6ef7.slice - libcontainer container kubepods-burstable-podcb11f14fc6bf4c2c58f8e940d97c6ef7.slice. Jan 24 12:14:56.772758 kubelet[2426]: E0124 12:14:56.772643 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 12:14:56.775686 kubelet[2426]: I0124 12:14:56.775578 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 12:14:56.776064 kubelet[2426]: E0124 12:14:56.775980 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Jan 24 12:14:56.778599 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. 
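
The "Failed to ensure lease exists, will retry" errors show the kubelet backing off while the API server at 10.0.0.151:6443 still refuses connections: the interval is 200ms on the first attempt above, 400ms here, and reaches 800ms a little further down. A minimal sketch of that doubling pattern; the ceiling is an assumption for illustration, since the log only shows the first few steps:

    # Exponential backoff as observed in the lease-controller retries above:
    # 200ms, 400ms, 800ms, ... The 7s cap used here is an assumption; the log
    # itself only shows the first few intervals.
    def backoff_intervals(start_ms=200, factor=2, cap_ms=7000, steps=6):
        interval = start_ms
        for _ in range(steps):
            yield interval
            interval = min(interval * factor, cap_ms)

    print(list(backoff_intervals()))   # [200, 400, 800, 1600, 3200, 6400]
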
Jan 24 12:14:56.784732 kubelet[2426]: E0124 12:14:56.784364 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 12:14:56.789181 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 24 12:14:56.792344 kubelet[2426]: E0124 12:14:56.792271 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 12:14:56.798596 kubelet[2426]: I0124 12:14:56.798260 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb11f14fc6bf4c2c58f8e940d97c6ef7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb11f14fc6bf4c2c58f8e940d97c6ef7\") " pod="kube-system/kube-apiserver-localhost" Jan 24 12:14:56.798596 kubelet[2426]: I0124 12:14:56.798340 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb11f14fc6bf4c2c58f8e940d97c6ef7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cb11f14fc6bf4c2c58f8e940d97c6ef7\") " pod="kube-system/kube-apiserver-localhost" Jan 24 12:14:56.798596 kubelet[2426]: I0124 12:14:56.798367 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 12:14:56.798596 kubelet[2426]: I0124 12:14:56.798393 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 12:14:56.798596 kubelet[2426]: I0124 12:14:56.798490 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 12:14:56.798834 kubelet[2426]: I0124 12:14:56.798512 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 24 12:14:56.798834 kubelet[2426]: I0124 12:14:56.798530 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb11f14fc6bf4c2c58f8e940d97c6ef7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb11f14fc6bf4c2c58f8e940d97c6ef7\") " pod="kube-system/kube-apiserver-localhost" Jan 24 12:14:56.798834 kubelet[2426]: I0124 12:14:56.798547 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 12:14:56.798834 kubelet[2426]: I0124 12:14:56.798567 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 12:14:56.979608 kubelet[2426]: I0124 12:14:56.979236 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 12:14:56.979608 kubelet[2426]: E0124 12:14:56.979522 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Jan 24 12:14:57.073809 kubelet[2426]: E0124 12:14:57.073559 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:14:57.075819 containerd[1599]: time="2026-01-24T12:14:57.074571991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cb11f14fc6bf4c2c58f8e940d97c6ef7,Namespace:kube-system,Attempt:0,}" Jan 24 12:14:57.085818 kubelet[2426]: E0124 12:14:57.085068 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:14:57.086240 containerd[1599]: time="2026-01-24T12:14:57.085931144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 24 12:14:57.093075 kubelet[2426]: E0124 12:14:57.092868 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:14:57.093956 containerd[1599]: time="2026-01-24T12:14:57.093777758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 24 12:14:57.100248 kubelet[2426]: E0124 12:14:57.100026 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="800ms" Jan 24 12:14:57.225288 containerd[1599]: time="2026-01-24T12:14:57.225049801Z" level=info msg="connecting to shim 2b0ddd85f317ee3522c7d1db5ebef97e0550548e14ed04c6705ba7263932a3f0" address="unix:///run/containerd/s/e77fc2026df8bb7d9ad33ff2afbbe6223d975b055ebd4e1f7f8157264b2d2d75" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:14:57.228956 containerd[1599]: time="2026-01-24T12:14:57.228913466Z" level=info msg="connecting to shim 7890cca462230c12020547accc2d335447753ae3df413734137f906e6cb632a9" address="unix:///run/containerd/s/db128288e46044158a3158a1494a65b072672b9277d430cdd6b59347a8e11050" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:14:57.243285 containerd[1599]: time="2026-01-24T12:14:57.242830010Z" level=info 
msg="connecting to shim 0a6a636c9c67b0ccaaccc9d93707d60991d22f9988da2a6b03ce7bf05aa212a9" address="unix:///run/containerd/s/c5fa72473cfe1464f7b81084c6e52a52277010ecf642cc7c0ce0fcc3fc9af3ee" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:14:57.288599 systemd[1]: Started cri-containerd-2b0ddd85f317ee3522c7d1db5ebef97e0550548e14ed04c6705ba7263932a3f0.scope - libcontainer container 2b0ddd85f317ee3522c7d1db5ebef97e0550548e14ed04c6705ba7263932a3f0. Jan 24 12:14:57.290753 systemd[1]: Started cri-containerd-7890cca462230c12020547accc2d335447753ae3df413734137f906e6cb632a9.scope - libcontainer container 7890cca462230c12020547accc2d335447753ae3df413734137f906e6cb632a9. Jan 24 12:14:57.302342 systemd[1]: Started cri-containerd-0a6a636c9c67b0ccaaccc9d93707d60991d22f9988da2a6b03ce7bf05aa212a9.scope - libcontainer container 0a6a636c9c67b0ccaaccc9d93707d60991d22f9988da2a6b03ce7bf05aa212a9. Jan 24 12:14:57.321000 audit: BPF prog-id=83 op=LOAD Jan 24 12:14:57.323000 audit: BPF prog-id=84 op=LOAD Jan 24 12:14:57.323000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2503 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366136333663396336376230636361616363633964393337303764 Jan 24 12:14:57.323000 audit: BPF prog-id=84 op=UNLOAD Jan 24 12:14:57.323000 audit[2532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366136333663396336376230636361616363633964393337303764 Jan 24 12:14:57.323000 audit: BPF prog-id=85 op=LOAD Jan 24 12:14:57.324000 audit: BPF prog-id=86 op=LOAD Jan 24 12:14:57.324000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2503 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.324000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366136333663396336376230636361616363633964393337303764 Jan 24 12:14:57.325000 audit: BPF prog-id=87 op=LOAD Jan 24 12:14:57.325000 audit[2521]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2481 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.325000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738393063636134363232333063313230323035343761636363326433 Jan 24 12:14:57.325000 audit: BPF prog-id=87 op=UNLOAD Jan 24 12:14:57.325000 audit[2521]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2481 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738393063636134363232333063313230323035343761636363326433 Jan 24 12:14:57.325000 audit: BPF prog-id=88 op=LOAD Jan 24 12:14:57.325000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2503 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366136333663396336376230636361616363633964393337303764 Jan 24 12:14:57.326000 audit: BPF prog-id=88 op=UNLOAD Jan 24 12:14:57.326000 audit[2532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.326000 audit: BPF prog-id=89 op=LOAD Jan 24 12:14:57.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366136333663396336376230636361616363633964393337303764 Jan 24 12:14:57.326000 audit: BPF prog-id=86 op=UNLOAD Jan 24 12:14:57.326000 audit[2532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366136333663396336376230636361616363633964393337303764 Jan 24 12:14:57.326000 audit: BPF prog-id=90 op=LOAD Jan 24 12:14:57.326000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2503 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.326000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366136333663396336376230636361616363633964393337303764 Jan 24 12:14:57.326000 audit[2521]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2481 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738393063636134363232333063313230323035343761636363326433 Jan 24 12:14:57.327000 audit: BPF prog-id=91 op=LOAD Jan 24 12:14:57.327000 audit[2521]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2481 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738393063636134363232333063313230323035343761636363326433 Jan 24 12:14:57.328000 audit: BPF prog-id=91 op=UNLOAD Jan 24 12:14:57.328000 audit[2521]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2481 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738393063636134363232333063313230323035343761636363326433 Jan 24 12:14:57.328000 audit: BPF prog-id=89 op=UNLOAD Jan 24 12:14:57.328000 audit[2521]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2481 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738393063636134363232333063313230323035343761636363326433 Jan 24 12:14:57.328000 audit: BPF prog-id=92 op=LOAD Jan 24 12:14:57.328000 audit[2521]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2481 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.328000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738393063636134363232333063313230323035343761636363326433 Jan 24 12:14:57.331000 audit: BPF prog-id=93 op=LOAD Jan 24 12:14:57.332000 audit: BPF prog-id=94 op=LOAD Jan 24 12:14:57.332000 audit[2514]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2476 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262306464643835663331376565333532326337643164623565626566 Jan 24 12:14:57.333000 audit: BPF prog-id=94 op=UNLOAD Jan 24 12:14:57.333000 audit[2514]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2476 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262306464643835663331376565333532326337643164623565626566 Jan 24 12:14:57.333000 audit: BPF prog-id=95 op=LOAD Jan 24 12:14:57.333000 audit[2514]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2476 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262306464643835663331376565333532326337643164623565626566 Jan 24 12:14:57.334000 audit: BPF prog-id=96 op=LOAD Jan 24 12:14:57.334000 audit[2514]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2476 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262306464643835663331376565333532326337643164623565626566 Jan 24 12:14:57.334000 audit: BPF prog-id=96 op=UNLOAD Jan 24 12:14:57.334000 audit[2514]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2476 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.334000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262306464643835663331376565333532326337643164623565626566 Jan 24 12:14:57.334000 audit: BPF prog-id=95 op=UNLOAD Jan 24 12:14:57.334000 audit[2514]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2476 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262306464643835663331376565333532326337643164623565626566 Jan 24 12:14:57.334000 audit: BPF prog-id=97 op=LOAD Jan 24 12:14:57.334000 audit[2514]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2476 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262306464643835663331376565333532326337643164623565626566 Jan 24 12:14:57.382722 kubelet[2426]: I0124 12:14:57.382692 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 12:14:57.383863 kubelet[2426]: E0124 12:14:57.383785 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Jan 24 12:14:57.403466 kubelet[2426]: W0124 12:14:57.403369 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jan 24 12:14:57.403891 kubelet[2426]: E0124 12:14:57.403758 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 12:14:57.413617 containerd[1599]: time="2026-01-24T12:14:57.413585821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b0ddd85f317ee3522c7d1db5ebef97e0550548e14ed04c6705ba7263932a3f0\"" Jan 24 12:14:57.415577 kubelet[2426]: E0124 12:14:57.415505 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:14:57.420352 containerd[1599]: time="2026-01-24T12:14:57.420292701Z" level=info msg="CreateContainer within sandbox \"2b0ddd85f317ee3522c7d1db5ebef97e0550548e14ed04c6705ba7263932a3f0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 12:14:57.433149 
containerd[1599]: time="2026-01-24T12:14:57.432816423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cb11f14fc6bf4c2c58f8e940d97c6ef7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7890cca462230c12020547accc2d335447753ae3df413734137f906e6cb632a9\"" Jan 24 12:14:57.434993 kubelet[2426]: E0124 12:14:57.434955 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:14:57.437061 containerd[1599]: time="2026-01-24T12:14:57.437019083Z" level=info msg="CreateContainer within sandbox \"7890cca462230c12020547accc2d335447753ae3df413734137f906e6cb632a9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 12:14:57.438182 containerd[1599]: time="2026-01-24T12:14:57.437996048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a6a636c9c67b0ccaaccc9d93707d60991d22f9988da2a6b03ce7bf05aa212a9\"" Jan 24 12:14:57.439232 kubelet[2426]: E0124 12:14:57.438979 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:14:57.440167 kubelet[2426]: W0124 12:14:57.440005 2426 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jan 24 12:14:57.440223 kubelet[2426]: E0124 12:14:57.440068 2426 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 12:14:57.441060 containerd[1599]: time="2026-01-24T12:14:57.440783453Z" level=info msg="CreateContainer within sandbox \"0a6a636c9c67b0ccaaccc9d93707d60991d22f9988da2a6b03ce7bf05aa212a9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 12:14:57.445063 containerd[1599]: time="2026-01-24T12:14:57.444999309Z" level=info msg="Container 250e1d5208e06f804f65c8db7deb0e95f2d724a0b0b371f2f94c99aa37433318: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:14:57.463565 containerd[1599]: time="2026-01-24T12:14:57.463460912Z" level=info msg="CreateContainer within sandbox \"2b0ddd85f317ee3522c7d1db5ebef97e0550548e14ed04c6705ba7263932a3f0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"250e1d5208e06f804f65c8db7deb0e95f2d724a0b0b371f2f94c99aa37433318\"" Jan 24 12:14:57.465856 containerd[1599]: time="2026-01-24T12:14:57.465628788Z" level=info msg="StartContainer for \"250e1d5208e06f804f65c8db7deb0e95f2d724a0b0b371f2f94c99aa37433318\"" Jan 24 12:14:57.467718 containerd[1599]: time="2026-01-24T12:14:57.467599576Z" level=info msg="Container 9997120e7627b3f29b938d25dd0d90b297c62abbc8fb630d6ad96bea0a2fe0e9: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:14:57.467919 containerd[1599]: time="2026-01-24T12:14:57.467856149Z" level=info msg="connecting to shim 250e1d5208e06f804f65c8db7deb0e95f2d724a0b0b371f2f94c99aa37433318" 
address="unix:///run/containerd/s/e77fc2026df8bb7d9ad33ff2afbbe6223d975b055ebd4e1f7f8157264b2d2d75" protocol=ttrpc version=3 Jan 24 12:14:57.472331 containerd[1599]: time="2026-01-24T12:14:57.472269563Z" level=info msg="Container 565af670919ddd56d91680f5d448f67be146c34cecc0232688c999c294ca57a6: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:14:57.478908 containerd[1599]: time="2026-01-24T12:14:57.478717842Z" level=info msg="CreateContainer within sandbox \"0a6a636c9c67b0ccaaccc9d93707d60991d22f9988da2a6b03ce7bf05aa212a9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9997120e7627b3f29b938d25dd0d90b297c62abbc8fb630d6ad96bea0a2fe0e9\"" Jan 24 12:14:57.481573 containerd[1599]: time="2026-01-24T12:14:57.481069768Z" level=info msg="StartContainer for \"9997120e7627b3f29b938d25dd0d90b297c62abbc8fb630d6ad96bea0a2fe0e9\"" Jan 24 12:14:57.484820 containerd[1599]: time="2026-01-24T12:14:57.484734101Z" level=info msg="connecting to shim 9997120e7627b3f29b938d25dd0d90b297c62abbc8fb630d6ad96bea0a2fe0e9" address="unix:///run/containerd/s/c5fa72473cfe1464f7b81084c6e52a52277010ecf642cc7c0ce0fcc3fc9af3ee" protocol=ttrpc version=3 Jan 24 12:14:57.488491 containerd[1599]: time="2026-01-24T12:14:57.488322232Z" level=info msg="CreateContainer within sandbox \"7890cca462230c12020547accc2d335447753ae3df413734137f906e6cb632a9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"565af670919ddd56d91680f5d448f67be146c34cecc0232688c999c294ca57a6\"" Jan 24 12:14:57.489768 containerd[1599]: time="2026-01-24T12:14:57.489635945Z" level=info msg="StartContainer for \"565af670919ddd56d91680f5d448f67be146c34cecc0232688c999c294ca57a6\"" Jan 24 12:14:57.491254 containerd[1599]: time="2026-01-24T12:14:57.490876302Z" level=info msg="connecting to shim 565af670919ddd56d91680f5d448f67be146c34cecc0232688c999c294ca57a6" address="unix:///run/containerd/s/db128288e46044158a3158a1494a65b072672b9277d430cdd6b59347a8e11050" protocol=ttrpc version=3 Jan 24 12:14:57.498481 systemd[1]: Started cri-containerd-250e1d5208e06f804f65c8db7deb0e95f2d724a0b0b371f2f94c99aa37433318.scope - libcontainer container 250e1d5208e06f804f65c8db7deb0e95f2d724a0b0b371f2f94c99aa37433318. Jan 24 12:14:57.517486 systemd[1]: Started cri-containerd-9997120e7627b3f29b938d25dd0d90b297c62abbc8fb630d6ad96bea0a2fe0e9.scope - libcontainer container 9997120e7627b3f29b938d25dd0d90b297c62abbc8fb630d6ad96bea0a2fe0e9. Jan 24 12:14:57.543460 systemd[1]: Started cri-containerd-565af670919ddd56d91680f5d448f67be146c34cecc0232688c999c294ca57a6.scope - libcontainer container 565af670919ddd56d91680f5d448f67be146c34cecc0232688c999c294ca57a6. 
Jan 24 12:14:57.546000 audit: BPF prog-id=98 op=LOAD Jan 24 12:14:57.548000 audit: BPF prog-id=99 op=LOAD Jan 24 12:14:57.548000 audit[2601]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2476 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235306531643532303865303666383034663635633864623764656230 Jan 24 12:14:57.548000 audit: BPF prog-id=99 op=UNLOAD Jan 24 12:14:57.548000 audit[2601]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2476 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235306531643532303865303666383034663635633864623764656230 Jan 24 12:14:57.551000 audit: BPF prog-id=100 op=LOAD Jan 24 12:14:57.551000 audit[2601]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2476 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235306531643532303865303666383034663635633864623764656230 Jan 24 12:14:57.552000 audit: BPF prog-id=101 op=LOAD Jan 24 12:14:57.552000 audit[2601]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2476 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235306531643532303865303666383034663635633864623764656230 Jan 24 12:14:57.554000 audit: BPF prog-id=101 op=UNLOAD Jan 24 12:14:57.554000 audit[2601]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2476 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235306531643532303865303666383034663635633864623764656230 Jan 24 12:14:57.554000 audit: BPF prog-id=100 op=UNLOAD Jan 24 12:14:57.554000 audit[2601]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2476 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235306531643532303865303666383034663635633864623764656230 Jan 24 12:14:57.554000 audit: BPF prog-id=102 op=LOAD Jan 24 12:14:57.555000 audit: BPF prog-id=103 op=LOAD Jan 24 12:14:57.555000 audit[2601]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2476 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235306531643532303865303666383034663635633864623764656230 Jan 24 12:14:57.556000 audit: BPF prog-id=104 op=LOAD Jan 24 12:14:57.556000 audit[2613]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2503 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939393731323065373632376233663239623933386432356464306439 Jan 24 12:14:57.556000 audit: BPF prog-id=104 op=UNLOAD Jan 24 12:14:57.556000 audit[2613]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939393731323065373632376233663239623933386432356464306439 Jan 24 12:14:57.557000 audit: BPF prog-id=105 op=LOAD Jan 24 12:14:57.557000 audit[2613]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2503 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939393731323065373632376233663239623933386432356464306439 Jan 24 12:14:57.558000 audit: BPF prog-id=106 op=LOAD Jan 24 12:14:57.558000 audit[2613]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2503 pid=2613 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939393731323065373632376233663239623933386432356464306439 Jan 24 12:14:57.558000 audit: BPF prog-id=106 op=UNLOAD Jan 24 12:14:57.558000 audit[2613]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939393731323065373632376233663239623933386432356464306439 Jan 24 12:14:57.558000 audit: BPF prog-id=105 op=UNLOAD Jan 24 12:14:57.558000 audit[2613]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939393731323065373632376233663239623933386432356464306439 Jan 24 12:14:57.558000 audit: BPF prog-id=107 op=LOAD Jan 24 12:14:57.558000 audit[2613]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2503 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939393731323065373632376233663239623933386432356464306439 Jan 24 12:14:57.569000 audit: BPF prog-id=108 op=LOAD Jan 24 12:14:57.570000 audit: BPF prog-id=109 op=LOAD Jan 24 12:14:57.570000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2481 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536356166363730393139646464353664393136383066356434343866 Jan 24 12:14:57.571000 audit: BPF prog-id=109 op=UNLOAD Jan 24 12:14:57.571000 audit[2627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2481 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536356166363730393139646464353664393136383066356434343866 Jan 24 12:14:57.571000 audit: BPF prog-id=110 op=LOAD Jan 24 12:14:57.571000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2481 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536356166363730393139646464353664393136383066356434343866 Jan 24 12:14:57.572000 audit: BPF prog-id=111 op=LOAD Jan 24 12:14:57.572000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2481 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536356166363730393139646464353664393136383066356434343866 Jan 24 12:14:57.572000 audit: BPF prog-id=111 op=UNLOAD Jan 24 12:14:57.572000 audit[2627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2481 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536356166363730393139646464353664393136383066356434343866 Jan 24 12:14:57.572000 audit: BPF prog-id=110 op=UNLOAD Jan 24 12:14:57.572000 audit[2627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2481 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536356166363730393139646464353664393136383066356434343866 Jan 24 12:14:57.572000 audit: BPF prog-id=112 op=LOAD Jan 24 12:14:57.572000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2481 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:14:57.572000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536356166363730393139646464353664393136383066356434343866 Jan 24 12:14:57.650882 containerd[1599]: time="2026-01-24T12:14:57.650803728Z" level=info msg="StartContainer for \"9997120e7627b3f29b938d25dd0d90b297c62abbc8fb630d6ad96bea0a2fe0e9\" returns successfully" Jan 24 12:14:57.659298 containerd[1599]: time="2026-01-24T12:14:57.659221779Z" level=info msg="StartContainer for \"565af670919ddd56d91680f5d448f67be146c34cecc0232688c999c294ca57a6\" returns successfully" Jan 24 12:14:57.683375 containerd[1599]: time="2026-01-24T12:14:57.682190304Z" level=info msg="StartContainer for \"250e1d5208e06f804f65c8db7deb0e95f2d724a0b0b371f2f94c99aa37433318\" returns successfully" Jan 24 12:14:58.187783 kubelet[2426]: I0124 12:14:58.187704 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 12:14:58.697363 kubelet[2426]: E0124 12:14:58.697272 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 12:14:58.697556 kubelet[2426]: E0124 12:14:58.697483 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:14:58.702769 kubelet[2426]: E0124 12:14:58.702580 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 12:14:58.702769 kubelet[2426]: E0124 12:14:58.702744 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:14:58.703153 kubelet[2426]: E0124 12:14:58.702917 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 12:14:58.703153 kubelet[2426]: E0124 12:14:58.703002 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:14:59.606907 kubelet[2426]: E0124 12:14:59.606788 2426 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 24 12:14:59.710385 kubelet[2426]: E0124 12:14:59.710291 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 12:14:59.710643 kubelet[2426]: E0124 12:14:59.710564 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:14:59.712040 kubelet[2426]: E0124 12:14:59.711961 2426 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 12:14:59.713988 kubelet[2426]: I0124 12:14:59.713903 2426 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 12:14:59.714037 kubelet[2426]: E0124 12:14:59.713989 2426 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node 
\"localhost\": node \"localhost\" not found" Jan 24 12:14:59.714373 kubelet[2426]: E0124 12:14:59.714303 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:14:59.743669 kubelet[2426]: E0124 12:14:59.743573 2426 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 12:14:59.844723 kubelet[2426]: E0124 12:14:59.844603 2426 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 12:14:59.947394 kubelet[2426]: E0124 12:14:59.945927 2426 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 12:14:59.997796 kubelet[2426]: I0124 12:14:59.997574 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 12:15:00.018726 kubelet[2426]: E0124 12:15:00.018637 2426 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 24 12:15:00.018726 kubelet[2426]: I0124 12:15:00.018671 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 12:15:00.023284 kubelet[2426]: E0124 12:15:00.022879 2426 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 24 12:15:00.023284 kubelet[2426]: I0124 12:15:00.022907 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 12:15:00.026542 kubelet[2426]: E0124 12:15:00.026463 2426 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 24 12:15:00.428765 kubelet[2426]: I0124 12:15:00.428610 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 12:15:00.432453 kubelet[2426]: E0124 12:15:00.432183 2426 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 24 12:15:00.432527 kubelet[2426]: E0124 12:15:00.432504 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:00.479040 kubelet[2426]: I0124 12:15:00.478889 2426 apiserver.go:52] "Watching apiserver" Jan 24 12:15:00.499046 kubelet[2426]: I0124 12:15:00.498875 2426 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 12:15:00.708185 kubelet[2426]: I0124 12:15:00.707365 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 12:15:00.710787 kubelet[2426]: I0124 12:15:00.707931 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 12:15:00.719207 kubelet[2426]: E0124 12:15:00.718956 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:00.726209 kubelet[2426]: E0124 12:15:00.725785 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:01.710631 kubelet[2426]: E0124 12:15:01.710554 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:01.711545 kubelet[2426]: E0124 12:15:01.711228 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:02.414053 systemd[1]: Reload requested from client PID 2704 ('systemctl') (unit session-10.scope)... Jan 24 12:15:02.414211 systemd[1]: Reloading... Jan 24 12:15:02.510327 zram_generator::config[2750]: No configuration found. Jan 24 12:15:02.780834 systemd[1]: Reloading finished in 366 ms. Jan 24 12:15:02.825274 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 12:15:02.855528 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 12:15:02.855962 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 12:15:02.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:15:02.856244 systemd[1]: kubelet.service: Consumed 1.373s CPU time, 133.8M memory peak. Jan 24 12:15:02.861734 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 24 12:15:02.861800 kernel: audit: type=1131 audit(1769256902.854:384): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:15:02.860610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
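Editor's note: the kubelet entries earlier in this window fail with "dial tcp 10.0.0.151:6443: connect: connection refused" because the static kube-apiserver pod is still coming up. A minimal reachability check for that same endpoint is sketched below; the address is taken from the log and the probe is purely illustrative.

```python
import socket

# Probe the API server endpoint the kubelet log shows as refusing connections
# (10.0.0.151:6443, from the entries above). Until the apiserver binds the
# port, create_connection fails with ECONNREFUSED, which is exactly the state
# the "connection refused" log lines describe.
def apiserver_reachable(host="10.0.0.151", port=6443, timeout=2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"not reachable yet: {exc}")
        return False

if __name__ == "__main__":
    apiserver_reachable()
```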
Jan 24 12:15:02.864000 audit: BPF prog-id=113 op=LOAD Jan 24 12:15:02.886966 kernel: audit: type=1334 audit(1769256902.864:385): prog-id=113 op=LOAD Jan 24 12:15:02.887279 kernel: audit: type=1334 audit(1769256902.864:386): prog-id=71 op=UNLOAD Jan 24 12:15:02.864000 audit: BPF prog-id=71 op=UNLOAD Jan 24 12:15:02.864000 audit: BPF prog-id=114 op=LOAD Jan 24 12:15:02.900682 kernel: audit: type=1334 audit(1769256902.864:387): prog-id=114 op=LOAD Jan 24 12:15:02.864000 audit: BPF prog-id=115 op=LOAD Jan 24 12:15:02.907345 kernel: audit: type=1334 audit(1769256902.864:388): prog-id=115 op=LOAD Jan 24 12:15:02.907391 kernel: audit: type=1334 audit(1769256902.864:389): prog-id=72 op=UNLOAD Jan 24 12:15:02.864000 audit: BPF prog-id=72 op=UNLOAD Jan 24 12:15:02.864000 audit: BPF prog-id=73 op=UNLOAD Jan 24 12:15:02.915805 kernel: audit: type=1334 audit(1769256902.864:390): prog-id=73 op=UNLOAD Jan 24 12:15:02.915858 kernel: audit: type=1334 audit(1769256902.865:391): prog-id=116 op=LOAD Jan 24 12:15:02.915888 kernel: audit: type=1334 audit(1769256902.865:392): prog-id=63 op=UNLOAD Jan 24 12:15:02.915907 kernel: audit: type=1334 audit(1769256902.865:393): prog-id=117 op=LOAD Jan 24 12:15:02.865000 audit: BPF prog-id=116 op=LOAD Jan 24 12:15:02.865000 audit: BPF prog-id=63 op=UNLOAD Jan 24 12:15:02.865000 audit: BPF prog-id=117 op=LOAD Jan 24 12:15:02.865000 audit: BPF prog-id=118 op=LOAD Jan 24 12:15:02.865000 audit: BPF prog-id=64 op=UNLOAD Jan 24 12:15:02.865000 audit: BPF prog-id=65 op=UNLOAD Jan 24 12:15:02.866000 audit: BPF prog-id=119 op=LOAD Jan 24 12:15:02.866000 audit: BPF prog-id=70 op=UNLOAD Jan 24 12:15:02.867000 audit: BPF prog-id=120 op=LOAD Jan 24 12:15:02.867000 audit: BPF prog-id=80 op=UNLOAD Jan 24 12:15:02.867000 audit: BPF prog-id=121 op=LOAD Jan 24 12:15:02.867000 audit: BPF prog-id=122 op=LOAD Jan 24 12:15:02.867000 audit: BPF prog-id=81 op=UNLOAD Jan 24 12:15:02.867000 audit: BPF prog-id=82 op=UNLOAD Jan 24 12:15:02.868000 audit: BPF prog-id=123 op=LOAD Jan 24 12:15:02.868000 audit: BPF prog-id=67 op=UNLOAD Jan 24 12:15:02.868000 audit: BPF prog-id=124 op=LOAD Jan 24 12:15:02.868000 audit: BPF prog-id=125 op=LOAD Jan 24 12:15:02.868000 audit: BPF prog-id=68 op=UNLOAD Jan 24 12:15:02.868000 audit: BPF prog-id=69 op=UNLOAD Jan 24 12:15:02.871000 audit: BPF prog-id=126 op=LOAD Jan 24 12:15:02.871000 audit: BPF prog-id=127 op=LOAD Jan 24 12:15:02.871000 audit: BPF prog-id=78 op=UNLOAD Jan 24 12:15:02.871000 audit: BPF prog-id=79 op=UNLOAD Jan 24 12:15:02.872000 audit: BPF prog-id=128 op=LOAD Jan 24 12:15:02.872000 audit: BPF prog-id=75 op=UNLOAD Jan 24 12:15:02.872000 audit: BPF prog-id=129 op=LOAD Jan 24 12:15:02.872000 audit: BPF prog-id=130 op=LOAD Jan 24 12:15:02.872000 audit: BPF prog-id=76 op=UNLOAD Jan 24 12:15:02.872000 audit: BPF prog-id=77 op=UNLOAD Jan 24 12:15:02.873000 audit: BPF prog-id=131 op=LOAD Jan 24 12:15:02.874000 audit: BPF prog-id=66 op=UNLOAD Jan 24 12:15:02.875000 audit: BPF prog-id=132 op=LOAD Jan 24 12:15:02.875000 audit: BPF prog-id=74 op=UNLOAD Jan 24 12:15:03.118059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 12:15:03.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:15:03.129582 (kubelet)[2795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 12:15:03.198807 kubelet[2795]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 12:15:03.198807 kubelet[2795]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 12:15:03.198807 kubelet[2795]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 12:15:03.198807 kubelet[2795]: I0124 12:15:03.198312 2795 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 12:15:03.220642 kubelet[2795]: I0124 12:15:03.220461 2795 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 12:15:03.220642 kubelet[2795]: I0124 12:15:03.220573 2795 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 12:15:03.221336 kubelet[2795]: I0124 12:15:03.221265 2795 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 12:15:03.223564 kubelet[2795]: I0124 12:15:03.223343 2795 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 24 12:15:03.229875 kubelet[2795]: I0124 12:15:03.229491 2795 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 12:15:03.239668 kubelet[2795]: I0124 12:15:03.238888 2795 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 12:15:03.246033 kubelet[2795]: I0124 12:15:03.245853 2795 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 12:15:03.246387 kubelet[2795]: I0124 12:15:03.246303 2795 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 12:15:03.246602 kubelet[2795]: I0124 12:15:03.246366 2795 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 12:15:03.246602 kubelet[2795]: I0124 12:15:03.246590 2795 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 12:15:03.246602 kubelet[2795]: I0124 12:15:03.246600 2795 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 12:15:03.246759 kubelet[2795]: I0124 12:15:03.246644 2795 state_mem.go:36] "Initialized new in-memory state store" Jan 24 12:15:03.246892 kubelet[2795]: I0124 12:15:03.246818 2795 kubelet.go:446] "Attempting to sync node with API server" Jan 24 12:15:03.246892 kubelet[2795]: I0124 12:15:03.246884 2795 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 12:15:03.246892 kubelet[2795]: I0124 12:15:03.246903 2795 kubelet.go:352] "Adding apiserver pod source" Jan 24 12:15:03.247047 kubelet[2795]: I0124 12:15:03.246913 2795 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 12:15:03.247960 kubelet[2795]: I0124 12:15:03.247860 2795 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 12:15:03.248620 kubelet[2795]: I0124 12:15:03.248541 2795 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 12:15:03.254787 kubelet[2795]: I0124 12:15:03.254230 2795 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 12:15:03.254787 kubelet[2795]: I0124 12:15:03.254341 2795 server.go:1287] "Started kubelet" Jan 24 12:15:03.255862 kubelet[2795]: I0124 12:15:03.255696 2795 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 12:15:03.256565 kubelet[2795]: I0124 12:15:03.256474 2795 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 12:15:03.257189 kubelet[2795]: I0124 12:15:03.256996 2795 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 12:15:03.260027 kubelet[2795]: I0124 12:15:03.259927 2795 server.go:479] "Adding debug handlers to kubelet server" Jan 24 12:15:03.261198 kubelet[2795]: I0124 12:15:03.260634 2795 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 12:15:03.264491 kubelet[2795]: I0124 12:15:03.264468 2795 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 12:15:03.276338 kubelet[2795]: I0124 12:15:03.276315 2795 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 12:15:03.277181 kubelet[2795]: I0124 12:15:03.276637 2795 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 12:15:03.277672 kubelet[2795]: I0124 12:15:03.277660 2795 reconciler.go:26] "Reconciler: start to sync state" Jan 24 12:15:03.282758 kubelet[2795]: I0124 12:15:03.282548 2795 factory.go:221] Registration of the systemd container factory successfully Jan 24 12:15:03.282758 kubelet[2795]: I0124 12:15:03.282692 2795 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 12:15:03.285497 kubelet[2795]: E0124 12:15:03.285480 2795 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 12:15:03.289381 kubelet[2795]: I0124 12:15:03.289309 2795 factory.go:221] Registration of the containerd container factory successfully Jan 24 12:15:03.306248 kubelet[2795]: I0124 12:15:03.305900 2795 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 12:15:03.318755 kubelet[2795]: I0124 12:15:03.318734 2795 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 12:15:03.318913 kubelet[2795]: I0124 12:15:03.318903 2795 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 12:15:03.318994 kubelet[2795]: I0124 12:15:03.318982 2795 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 12:15:03.319039 kubelet[2795]: I0124 12:15:03.319031 2795 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 12:15:03.319502 kubelet[2795]: E0124 12:15:03.319256 2795 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 12:15:03.382077 kubelet[2795]: I0124 12:15:03.381497 2795 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 12:15:03.382077 kubelet[2795]: I0124 12:15:03.381583 2795 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 12:15:03.382077 kubelet[2795]: I0124 12:15:03.381617 2795 state_mem.go:36] "Initialized new in-memory state store" Jan 24 12:15:03.382077 kubelet[2795]: I0124 12:15:03.381823 2795 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 12:15:03.382077 kubelet[2795]: I0124 12:15:03.381840 2795 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 12:15:03.382077 kubelet[2795]: I0124 12:15:03.381864 2795 policy_none.go:49] "None policy: Start" Jan 24 12:15:03.382077 kubelet[2795]: I0124 12:15:03.381875 2795 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 12:15:03.382077 kubelet[2795]: I0124 12:15:03.381888 2795 state_mem.go:35] "Initializing new in-memory state store" Jan 24 12:15:03.382077 kubelet[2795]: I0124 12:15:03.382010 2795 state_mem.go:75] "Updated machine memory state" Jan 24 12:15:03.399620 kubelet[2795]: I0124 12:15:03.398966 2795 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 12:15:03.400855 kubelet[2795]: I0124 12:15:03.400738 2795 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 12:15:03.403299 kubelet[2795]: I0124 12:15:03.401580 2795 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 12:15:03.405271 kubelet[2795]: I0124 12:15:03.404518 2795 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 12:15:03.405271 kubelet[2795]: E0124 12:15:03.404846 2795 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 12:15:03.420458 kubelet[2795]: I0124 12:15:03.419805 2795 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 12:15:03.421384 kubelet[2795]: I0124 12:15:03.421232 2795 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 12:15:03.422453 kubelet[2795]: I0124 12:15:03.421544 2795 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 12:15:03.448197 kubelet[2795]: E0124 12:15:03.448029 2795 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 12:15:03.450744 kubelet[2795]: E0124 12:15:03.450219 2795 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 12:15:03.479215 kubelet[2795]: I0124 12:15:03.478672 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 12:15:03.479215 kubelet[2795]: I0124 12:15:03.478801 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 24 12:15:03.479215 kubelet[2795]: I0124 12:15:03.478825 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb11f14fc6bf4c2c58f8e940d97c6ef7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb11f14fc6bf4c2c58f8e940d97c6ef7\") " pod="kube-system/kube-apiserver-localhost" Jan 24 12:15:03.479215 kubelet[2795]: I0124 12:15:03.478841 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb11f14fc6bf4c2c58f8e940d97c6ef7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cb11f14fc6bf4c2c58f8e940d97c6ef7\") " pod="kube-system/kube-apiserver-localhost" Jan 24 12:15:03.479215 kubelet[2795]: I0124 12:15:03.479011 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 12:15:03.479504 kubelet[2795]: I0124 12:15:03.479034 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 12:15:03.479504 kubelet[2795]: I0124 12:15:03.479053 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/cb11f14fc6bf4c2c58f8e940d97c6ef7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb11f14fc6bf4c2c58f8e940d97c6ef7\") " pod="kube-system/kube-apiserver-localhost" Jan 24 12:15:03.479504 kubelet[2795]: I0124 12:15:03.479075 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 12:15:03.479504 kubelet[2795]: I0124 12:15:03.479268 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 12:15:03.535515 kubelet[2795]: I0124 12:15:03.534937 2795 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 12:15:03.564533 kubelet[2795]: I0124 12:15:03.563061 2795 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 24 12:15:03.567877 kubelet[2795]: I0124 12:15:03.567858 2795 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 12:15:03.743246 kubelet[2795]: E0124 12:15:03.742950 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:03.749592 kubelet[2795]: E0124 12:15:03.748868 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:03.750789 kubelet[2795]: E0124 12:15:03.750769 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:04.248251 kubelet[2795]: I0124 12:15:04.248075 2795 apiserver.go:52] "Watching apiserver" Jan 24 12:15:04.278513 kubelet[2795]: I0124 12:15:04.278306 2795 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 12:15:04.352750 kubelet[2795]: I0124 12:15:04.352559 2795 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 12:15:04.353218 kubelet[2795]: I0124 12:15:04.353192 2795 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 12:15:04.355552 kubelet[2795]: E0124 12:15:04.355354 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:04.382212 kubelet[2795]: E0124 12:15:04.381703 2795 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 12:15:04.382212 kubelet[2795]: E0124 12:15:04.381857 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:04.382710 kubelet[2795]: E0124 12:15:04.382696 2795 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 12:15:04.383488 kubelet[2795]: E0124 12:15:04.383471 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:04.395172 kubelet[2795]: I0124 12:15:04.394682 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.394668609 podStartE2EDuration="4.394668609s" podCreationTimestamp="2026-01-24 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 12:15:04.394496328 +0000 UTC m=+1.258938641" watchObservedRunningTime="2026-01-24 12:15:04.394668609 +0000 UTC m=+1.259110921" Jan 24 12:15:04.414164 kubelet[2795]: I0124 12:15:04.414028 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.414011703 podStartE2EDuration="1.414011703s" podCreationTimestamp="2026-01-24 12:15:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 12:15:04.412632161 +0000 UTC m=+1.277074473" watchObservedRunningTime="2026-01-24 12:15:04.414011703 +0000 UTC m=+1.278454025" Jan 24 12:15:04.450940 kubelet[2795]: I0124 12:15:04.450803 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.45078336 podStartE2EDuration="4.45078336s" podCreationTimestamp="2026-01-24 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 12:15:04.433320399 +0000 UTC m=+1.297762711" watchObservedRunningTime="2026-01-24 12:15:04.45078336 +0000 UTC m=+1.315225672" Jan 24 12:15:05.355601 kubelet[2795]: E0124 12:15:05.355549 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:05.356571 kubelet[2795]: E0124 12:15:05.355640 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:06.359396 kubelet[2795]: E0124 12:15:06.359351 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:07.876602 kubelet[2795]: I0124 12:15:07.876349 2795 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 12:15:07.877803 containerd[1599]: time="2026-01-24T12:15:07.877769579Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 12:15:07.878594 kubelet[2795]: I0124 12:15:07.878528 2795 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 12:15:08.170268 update_engine[1581]: I20260124 12:15:08.169983 1581 update_attempter.cc:509] Updating boot flags... 
Jan 24 12:15:08.779360 systemd[1]: Created slice kubepods-besteffort-pod1dacf8e6_abea_4185_b80a_e90ac57e7890.slice - libcontainer container kubepods-besteffort-pod1dacf8e6_abea_4185_b80a_e90ac57e7890.slice. Jan 24 12:15:08.823340 kubelet[2795]: I0124 12:15:08.823013 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1dacf8e6-abea-4185-b80a-e90ac57e7890-kube-proxy\") pod \"kube-proxy-j8v2s\" (UID: \"1dacf8e6-abea-4185-b80a-e90ac57e7890\") " pod="kube-system/kube-proxy-j8v2s" Jan 24 12:15:08.823340 kubelet[2795]: I0124 12:15:08.823063 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dacf8e6-abea-4185-b80a-e90ac57e7890-xtables-lock\") pod \"kube-proxy-j8v2s\" (UID: \"1dacf8e6-abea-4185-b80a-e90ac57e7890\") " pod="kube-system/kube-proxy-j8v2s" Jan 24 12:15:08.823340 kubelet[2795]: I0124 12:15:08.823208 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dacf8e6-abea-4185-b80a-e90ac57e7890-lib-modules\") pod \"kube-proxy-j8v2s\" (UID: \"1dacf8e6-abea-4185-b80a-e90ac57e7890\") " pod="kube-system/kube-proxy-j8v2s" Jan 24 12:15:08.823340 kubelet[2795]: I0124 12:15:08.823239 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnmhh\" (UniqueName: \"kubernetes.io/projected/1dacf8e6-abea-4185-b80a-e90ac57e7890-kube-api-access-tnmhh\") pod \"kube-proxy-j8v2s\" (UID: \"1dacf8e6-abea-4185-b80a-e90ac57e7890\") " pod="kube-system/kube-proxy-j8v2s" Jan 24 12:15:09.017370 systemd[1]: Created slice kubepods-besteffort-pod173d2dcc_fc82_48b3_a242_c2f2c255dd32.slice - libcontainer container kubepods-besteffort-pod173d2dcc_fc82_48b3_a242_c2f2c255dd32.slice. 
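The "Created slice kubepods-besteffort-pod…" units above show how the kubelet's systemd cgroup driver derives a slice name from a pod's QoS class and UID: the UID's dashes become underscores and the result is nested under kubepods-besteffort. A small sketch of that mapping, using the kube-proxy pod UID from the volume entries above; the exact escaping beyond dash-to-underscore, and how other QoS classes are named, is an assumption drawn only from the entries visible here:

    def pod_slice_name(pod_uid: str, qos: str = "besteffort") -> str:
        # Mirrors the naming visible in the "Created slice" entries above:
        # dashes in the pod UID become underscores under kubepods-<qos>-pod….slice.
        return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

    # UID of kube-proxy-j8v2s as reported by the kubelet above.
    print(pod_slice_name("1dacf8e6-abea-4185-b80a-e90ac57e7890"))
    # -> kubepods-besteffort-pod1dacf8e6_abea_4185_b80a_e90ac57e7890.slice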
Jan 24 12:15:09.023874 kubelet[2795]: I0124 12:15:09.023604 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb4f7\" (UniqueName: \"kubernetes.io/projected/173d2dcc-fc82-48b3-a242-c2f2c255dd32-kube-api-access-jb4f7\") pod \"tigera-operator-7dcd859c48-rlln9\" (UID: \"173d2dcc-fc82-48b3-a242-c2f2c255dd32\") " pod="tigera-operator/tigera-operator-7dcd859c48-rlln9" Jan 24 12:15:09.023874 kubelet[2795]: I0124 12:15:09.023710 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/173d2dcc-fc82-48b3-a242-c2f2c255dd32-var-lib-calico\") pod \"tigera-operator-7dcd859c48-rlln9\" (UID: \"173d2dcc-fc82-48b3-a242-c2f2c255dd32\") " pod="tigera-operator/tigera-operator-7dcd859c48-rlln9" Jan 24 12:15:09.043214 kubelet[2795]: E0124 12:15:09.042798 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:09.095578 kubelet[2795]: E0124 12:15:09.095321 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:09.096581 containerd[1599]: time="2026-01-24T12:15:09.096308561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8v2s,Uid:1dacf8e6-abea-4185-b80a-e90ac57e7890,Namespace:kube-system,Attempt:0,}" Jan 24 12:15:09.172929 containerd[1599]: time="2026-01-24T12:15:09.172875048Z" level=info msg="connecting to shim 403cf7158679cb0ffd4537d43d9a00c6f9486d76be643adda340c041d533c62d" address="unix:///run/containerd/s/f5ca87e6fa9ff096c9ca96305c4f695fc4773271a628d208fb843628864a822c" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:15:09.280495 systemd[1]: Started cri-containerd-403cf7158679cb0ffd4537d43d9a00c6f9486d76be643adda340c041d533c62d.scope - libcontainer container 403cf7158679cb0ffd4537d43d9a00c6f9486d76be643adda340c041d533c62d. 
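RunPodSandbox returns a sandbox ID, and containerd then starts a shim reachable over the ttrpc socket shown in the "connecting to shim" line. To inspect such a sandbox after the fact, something like the following sketch works, assuming crictl is installed on the node and containerd exposes its CRI socket at the usual /run/containerd/containerd.sock path (both are assumptions, not stated in the log):

    import json
    import subprocess

    # Sandbox ID for kube-proxy-j8v2s, copied from the cri-containerd scope above.
    SANDBOX_ID = "403cf7158679cb0ffd4537d43d9a00c6f9486d76be643adda340c041d533c62d"

    result = subprocess.run(
        ["crictl", "--runtime-endpoint", "unix:///run/containerd/containerd.sock",
         "inspectp", SANDBOX_ID],
        capture_output=True, text=True, check=True,
    )
    status = json.loads(result.stdout)["status"]
    print(status["metadata"])   # pod name, namespace, uid
    print(status["state"])      # e.g. SANDBOX_READY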
Jan 24 12:15:09.299000 audit: BPF prog-id=133 op=LOAD Jan 24 12:15:09.305208 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 24 12:15:09.305251 kernel: audit: type=1334 audit(1769256909.299:426): prog-id=133 op=LOAD Jan 24 12:15:09.300000 audit: BPF prog-id=134 op=LOAD Jan 24 12:15:09.311573 kernel: audit: type=1334 audit(1769256909.300:427): prog-id=134 op=LOAD Jan 24 12:15:09.300000 audit[2882]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.324604 containerd[1599]: time="2026-01-24T12:15:09.324278185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rlln9,Uid:173d2dcc-fc82-48b3-a242-c2f2c255dd32,Namespace:tigera-operator,Attempt:0,}" Jan 24 12:15:09.327697 kernel: audit: type=1300 audit(1769256909.300:427): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.327744 kernel: audit: type=1327 audit(1769256909.300:427): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336366373135383637396362306666643435333764343364396130 Jan 24 12:15:09.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336366373135383637396362306666643435333764343364396130 Jan 24 12:15:09.300000 audit: BPF prog-id=134 op=UNLOAD Jan 24 12:15:09.349539 kernel: audit: type=1334 audit(1769256909.300:428): prog-id=134 op=UNLOAD Jan 24 12:15:09.349588 kernel: audit: type=1300 audit(1769256909.300:428): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.300000 audit[2882]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.365936 kernel: audit: type=1327 audit(1769256909.300:428): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336366373135383637396362306666643435333764343364396130 Jan 24 12:15:09.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336366373135383637396362306666643435333764343364396130 Jan 24 12:15:09.375980 kubelet[2795]: E0124 12:15:09.375797 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:09.300000 audit: BPF prog-id=135 op=LOAD Jan 24 12:15:09.386252 kernel: audit: type=1334 audit(1769256909.300:429): prog-id=135 op=LOAD Jan 24 12:15:09.300000 audit[2882]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.402802 containerd[1599]: time="2026-01-24T12:15:09.402655896Z" level=info msg="connecting to shim 57c37b6b1c5668078b3d725250d864381f186f0ed1770c26d0bd6558e8938f50" address="unix:///run/containerd/s/983c53bc29fb149c3593dde326b6119be41fecb923d43afa55bbfdbb7c2f5b3a" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:15:09.404366 kernel: audit: type=1300 audit(1769256909.300:429): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336366373135383637396362306666643435333764343364396130 Jan 24 12:15:09.419688 containerd[1599]: time="2026-01-24T12:15:09.419649577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8v2s,Uid:1dacf8e6-abea-4185-b80a-e90ac57e7890,Namespace:kube-system,Attempt:0,} returns sandbox id \"403cf7158679cb0ffd4537d43d9a00c6f9486d76be643adda340c041d533c62d\"" Jan 24 12:15:09.421213 kernel: audit: type=1327 audit(1769256909.300:429): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336366373135383637396362306666643435333764343364396130 Jan 24 12:15:09.301000 audit: BPF prog-id=136 op=LOAD Jan 24 12:15:09.301000 audit[2882]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336366373135383637396362306666643435333764343364396130 Jan 24 12:15:09.301000 audit: BPF prog-id=136 op=UNLOAD Jan 24 12:15:09.301000 audit[2882]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.301000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336366373135383637396362306666643435333764343364396130 Jan 24 12:15:09.301000 audit: BPF prog-id=135 op=UNLOAD Jan 24 12:15:09.301000 audit[2882]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336366373135383637396362306666643435333764343364396130 Jan 24 12:15:09.301000 audit: BPF prog-id=137 op=LOAD Jan 24 12:15:09.301000 audit[2882]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336366373135383637396362306666643435333764343364396130 Jan 24 12:15:09.422192 kubelet[2795]: E0124 12:15:09.422060 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:09.430805 containerd[1599]: time="2026-01-24T12:15:09.430626599Z" level=info msg="CreateContainer within sandbox \"403cf7158679cb0ffd4537d43d9a00c6f9486d76be643adda340c041d533c62d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 12:15:09.454909 containerd[1599]: time="2026-01-24T12:15:09.454808570Z" level=info msg="Container cc2b857df54474fab65dc925b0f3f0cb5b12e2bcbb7db631449882545d8edf54: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:15:09.458570 systemd[1]: Started cri-containerd-57c37b6b1c5668078b3d725250d864381f186f0ed1770c26d0bd6558e8938f50.scope - libcontainer container 57c37b6b1c5668078b3d725250d864381f186f0ed1770c26d0bd6558e8938f50. 
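The audit records interleaved with the containerd output are runc (comm="runc", exe="/usr/bin/runc") loading and releasing eBPF programs while it sets up the two sandboxes: each "BPF prog-id=… op=LOAD" pairs with a SYSCALL record for syscall 321, which is bpf(2) on x86_64, and each UNLOAD pairs with syscall 3, close(2), as the program fd is closed. The NETFILTER_CFG records further down use syscall 46, sendmsg(2), since iptables-nft programs the kernel over netlink. A tiny lookup sketch for reading these records without an audit toolchain; the table only covers the x86_64 numbers that actually appear in this capture:

    # x86_64 syscall numbers seen in the audit records of this log.
    SYSCALLS = {3: "close", 46: "sendmsg", 321: "bpf"}

    def describe(arch: str, nr: int) -> str:
        # arch=c000003e in the records above denotes AUDIT_ARCH_X86_64.
        assert arch == "c000003e", "table only covers x86_64"
        return SYSCALLS.get(nr, f"unknown syscall {nr}")

    for nr in (321, 3, 46):
        print(nr, "->", describe("c000003e", nr))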
Jan 24 12:15:09.468217 containerd[1599]: time="2026-01-24T12:15:09.467866292Z" level=info msg="CreateContainer within sandbox \"403cf7158679cb0ffd4537d43d9a00c6f9486d76be643adda340c041d533c62d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cc2b857df54474fab65dc925b0f3f0cb5b12e2bcbb7db631449882545d8edf54\"" Jan 24 12:15:09.471188 containerd[1599]: time="2026-01-24T12:15:09.471063985Z" level=info msg="StartContainer for \"cc2b857df54474fab65dc925b0f3f0cb5b12e2bcbb7db631449882545d8edf54\"" Jan 24 12:15:09.473972 containerd[1599]: time="2026-01-24T12:15:09.473858007Z" level=info msg="connecting to shim cc2b857df54474fab65dc925b0f3f0cb5b12e2bcbb7db631449882545d8edf54" address="unix:///run/containerd/s/f5ca87e6fa9ff096c9ca96305c4f695fc4773271a628d208fb843628864a822c" protocol=ttrpc version=3 Jan 24 12:15:09.479000 audit: BPF prog-id=138 op=LOAD Jan 24 12:15:09.480000 audit: BPF prog-id=139 op=LOAD Jan 24 12:15:09.480000 audit[2927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2915 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537633337623662316335363638303738623364373235323530643836 Jan 24 12:15:09.480000 audit: BPF prog-id=139 op=UNLOAD Jan 24 12:15:09.480000 audit[2927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2915 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537633337623662316335363638303738623364373235323530643836 Jan 24 12:15:09.480000 audit: BPF prog-id=140 op=LOAD Jan 24 12:15:09.480000 audit[2927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2915 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537633337623662316335363638303738623364373235323530643836 Jan 24 12:15:09.481000 audit: BPF prog-id=141 op=LOAD Jan 24 12:15:09.481000 audit[2927]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2915 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.481000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537633337623662316335363638303738623364373235323530643836 Jan 24 12:15:09.481000 audit: BPF prog-id=141 op=UNLOAD Jan 24 12:15:09.481000 audit[2927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2915 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537633337623662316335363638303738623364373235323530643836 Jan 24 12:15:09.481000 audit: BPF prog-id=140 op=UNLOAD Jan 24 12:15:09.481000 audit[2927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2915 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537633337623662316335363638303738623364373235323530643836 Jan 24 12:15:09.481000 audit: BPF prog-id=142 op=LOAD Jan 24 12:15:09.481000 audit[2927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2915 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537633337623662316335363638303738623364373235323530643836 Jan 24 12:15:09.525663 systemd[1]: Started cri-containerd-cc2b857df54474fab65dc925b0f3f0cb5b12e2bcbb7db631449882545d8edf54.scope - libcontainer container cc2b857df54474fab65dc925b0f3f0cb5b12e2bcbb7db631449882545d8edf54. 
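Each audit PROCTITLE field is the invoking command line, hex-encoded with NUL bytes separating the arguments, so the long hex strings above (and the iptables/ip6tables ones that follow) can be read back directly. A short decoder, applied to a prefix of the runc proctitle recorded above and to the first NETFILTER_CFG proctitle below; the recorded values are truncated by the audit subsystem, which is why the container IDs appear cut off:

    def decode_proctitle(hexstr: str) -> str:
        """Turn an audit PROCTITLE hex value back into a readable command line."""
        return " ".join(
            part.decode() for part in bytes.fromhex(hexstr).split(b"\x00") if part
        )

    # Prefix of the runc proctitle from the records above.
    print(decode_proctitle(
        "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"))
    # -> runc --root /run/containerd/runc/k8s.io

    # First iptables proctitle from the NETFILTER_CFG records that follow:
    # kube-proxy creating its KUBE-PROXY-CANARY chain in the mangle table.
    print(decode_proctitle(
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"))
    # -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle

Decoded this way, the NETFILTER_CFG block below reads as kube-proxy registering its standard chains (KUBE-SERVICES, KUBE-NODEPORTS, KUBE-FORWARD, KUBE-POSTROUTING, KUBE-PROXY-FIREWALL, KUBE-EXTERNAL-SERVICES) in the filter and nat tables for both IPv4 and IPv6.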
Jan 24 12:15:09.556076 containerd[1599]: time="2026-01-24T12:15:09.555852974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rlln9,Uid:173d2dcc-fc82-48b3-a242-c2f2c255dd32,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"57c37b6b1c5668078b3d725250d864381f186f0ed1770c26d0bd6558e8938f50\"" Jan 24 12:15:09.563621 containerd[1599]: time="2026-01-24T12:15:09.563022884Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 12:15:09.607000 audit: BPF prog-id=143 op=LOAD Jan 24 12:15:09.607000 audit[2946]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2871 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363326238353764663534343734666162363564633932356230663366 Jan 24 12:15:09.607000 audit: BPF prog-id=144 op=LOAD Jan 24 12:15:09.607000 audit[2946]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2871 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363326238353764663534343734666162363564633932356230663366 Jan 24 12:15:09.607000 audit: BPF prog-id=144 op=UNLOAD Jan 24 12:15:09.607000 audit[2946]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363326238353764663534343734666162363564633932356230663366 Jan 24 12:15:09.607000 audit: BPF prog-id=143 op=UNLOAD Jan 24 12:15:09.607000 audit[2946]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363326238353764663534343734666162363564633932356230663366 Jan 24 12:15:09.607000 audit: BPF prog-id=145 op=LOAD Jan 24 12:15:09.607000 audit[2946]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2871 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363326238353764663534343734666162363564633932356230663366 Jan 24 12:15:09.642772 containerd[1599]: time="2026-01-24T12:15:09.642589588Z" level=info msg="StartContainer for \"cc2b857df54474fab65dc925b0f3f0cb5b12e2bcbb7db631449882545d8edf54\" returns successfully" Jan 24 12:15:09.903000 audit[3019]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:09.903000 audit[3019]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed83838f0 a2=0 a3=7ffed83838dc items=0 ppid=2961 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.903000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 12:15:09.908000 audit[3020]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:09.908000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6e545c10 a2=0 a3=7ffc6e545bfc items=0 ppid=2961 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.908000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 12:15:09.914000 audit[3021]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:09.914000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdccb195e0 a2=0 a3=7ffdccb195cc items=0 ppid=2961 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 12:15:09.914000 audit[3022]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:09.914000 audit[3022]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1cb683e0 a2=0 a3=7fff1cb683cc items=0 ppid=2961 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.914000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 12:15:09.921000 audit[3023]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:09.921000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff852a860 a2=0 
a3=7ffff852a84c items=0 ppid=2961 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 12:15:09.923000 audit[3024]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:09.923000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6f150070 a2=0 a3=7fff6f15005c items=0 ppid=2961 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:09.923000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 12:15:10.010000 audit[3025]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.010000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd9ba58550 a2=0 a3=7ffd9ba5853c items=0 ppid=2961 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.010000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 12:15:10.016000 audit[3027]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.016000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdfeda43c0 a2=0 a3=7ffdfeda43ac items=0 ppid=2961 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.016000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 24 12:15:10.026000 audit[3030]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.026000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffde08c7d90 a2=0 a3=7ffde08c7d7c items=0 ppid=2961 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.026000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 24 12:15:10.031000 audit[3031]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3031 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.031000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8d0d8bf0 a2=0 a3=7ffe8d0d8bdc items=0 ppid=2961 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.031000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 12:15:10.039000 audit[3033]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.039000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffde7411150 a2=0 a3=7ffde741113c items=0 ppid=2961 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.039000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 12:15:10.043000 audit[3034]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.043000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd73504f20 a2=0 a3=7ffd73504f0c items=0 ppid=2961 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.043000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 12:15:10.052000 audit[3036]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.052000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd11e95970 a2=0 a3=7ffd11e9595c items=0 ppid=2961 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.052000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 24 12:15:10.064000 audit[3039]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.064000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe111ab090 a2=0 a3=7ffe111ab07c items=0 ppid=2961 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.064000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 24 12:15:10.069000 audit[3040]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.069000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff84dca730 a2=0 a3=7fff84dca71c items=0 ppid=2961 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 12:15:10.078000 audit[3042]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.078000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffde32643a0 a2=0 a3=7ffde326438c items=0 ppid=2961 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.078000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 12:15:10.082000 audit[3043]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.082000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc9c17a20 a2=0 a3=7ffdc9c17a0c items=0 ppid=2961 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.082000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 12:15:10.090000 audit[3045]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.090000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcd30833d0 a2=0 a3=7ffcd30833bc items=0 ppid=2961 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.090000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 12:15:10.103000 audit[3048]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.103000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff0313e390 a2=0 a3=7fff0313e37c items=0 ppid=2961 
pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.103000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 12:15:10.117000 audit[3051]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.117000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5b070080 a2=0 a3=7ffc5b07006c items=0 ppid=2961 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.117000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 24 12:15:10.121000 audit[3052]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.121000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe0e7ee760 a2=0 a3=7ffe0e7ee74c items=0 ppid=2961 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.121000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 12:15:10.130000 audit[3054]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.130000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc11a92810 a2=0 a3=7ffc11a927fc items=0 ppid=2961 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.130000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 12:15:10.141000 audit[3057]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.141000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeab0587d0 a2=0 a3=7ffeab0587bc items=0 ppid=2961 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.141000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 12:15:10.146000 audit[3058]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.146000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd589e7320 a2=0 a3=7ffd589e730c items=0 ppid=2961 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.146000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 12:15:10.154000 audit[3060]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:15:10.154000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff26356ea0 a2=0 a3=7fff26356e8c items=0 ppid=2961 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.154000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 12:15:10.208000 audit[3066]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:10.208000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff9532ff20 a2=0 a3=7fff9532ff0c items=0 ppid=2961 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.208000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:10.227000 audit[3066]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:10.227000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff9532ff20 a2=0 a3=7fff9532ff0c items=0 ppid=2961 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.227000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:10.231000 audit[3071]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.231000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffa89db390 a2=0 a3=7fffa89db37c items=0 ppid=2961 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.231000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 12:15:10.240000 audit[3073]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.240000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc13e946b0 a2=0 a3=7ffc13e9469c items=0 ppid=2961 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.240000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 24 12:15:10.252000 audit[3076]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.252000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc008480f0 a2=0 a3=7ffc008480dc items=0 ppid=2961 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.252000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 24 12:15:10.255000 audit[3077]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.255000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf87afb50 a2=0 a3=7ffcf87afb3c items=0 ppid=2961 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.255000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 12:15:10.263000 audit[3079]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.263000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe71ecb570 a2=0 a3=7ffe71ecb55c items=0 ppid=2961 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 12:15:10.267000 audit[3080]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 
12:15:10.267000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6afd61e0 a2=0 a3=7fff6afd61cc items=0 ppid=2961 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.267000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 12:15:10.275000 audit[3082]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.275000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffea6291a40 a2=0 a3=7ffea6291a2c items=0 ppid=2961 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.275000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 24 12:15:10.289000 audit[3085]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.289000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe18262c00 a2=0 a3=7ffe18262bec items=0 ppid=2961 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.289000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 24 12:15:10.294000 audit[3086]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.294000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc4331fd0 a2=0 a3=7fffc4331fbc items=0 ppid=2961 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.294000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 12:15:10.301000 audit[3088]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.301000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc5936f210 a2=0 a3=7ffc5936f1fc items=0 ppid=2961 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.301000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 12:15:10.304000 audit[3089]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.304000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0c3f0000 a2=0 a3=7ffd0c3effec items=0 ppid=2961 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.304000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 12:15:10.315000 audit[3091]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.315000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe17bc08a0 a2=0 a3=7ffe17bc088c items=0 ppid=2961 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.315000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 12:15:10.327000 audit[3094]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.327000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed7e80890 a2=0 a3=7ffed7e8087c items=0 ppid=2961 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.327000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 24 12:15:10.340000 audit[3097]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.340000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd506edf10 a2=0 a3=7ffd506edefc items=0 ppid=2961 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.340000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 24 12:15:10.345000 audit[3098]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 24 12:15:10.345000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd859e8990 a2=0 a3=7ffd859e897c items=0 ppid=2961 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 12:15:10.352000 audit[3100]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.352000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe9c595270 a2=0 a3=7ffe9c59525c items=0 ppid=2961 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.352000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 12:15:10.365000 audit[3103]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.365000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff2303ad40 a2=0 a3=7fff2303ad2c items=0 ppid=2961 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.365000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 12:15:10.370000 audit[3104]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.370000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd57992220 a2=0 a3=7ffd5799220c items=0 ppid=2961 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.370000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 12:15:10.378000 audit[3106]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.378000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd455aba90 a2=0 a3=7ffd455aba7c items=0 ppid=2961 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.378000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 12:15:10.385000 audit[3108]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.385000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccff20f30 a2=0 a3=7ffccff20f1c items=0 ppid=2961 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.385000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 12:15:10.392000 audit[3113]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.392000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe482d6d60 a2=0 a3=7ffe482d6d4c items=0 ppid=2961 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.392000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 12:15:10.396833 kubelet[2795]: E0124 12:15:10.395507 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:10.422000 audit[3116]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:15:10.422000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc52860190 a2=0 a3=7ffc5286017c items=0 ppid=2961 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 12:15:10.429836 kubelet[2795]: I0124 12:15:10.429715 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j8v2s" podStartSLOduration=2.42969637 podStartE2EDuration="2.42969637s" podCreationTimestamp="2026-01-24 12:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 12:15:10.429625157 +0000 UTC m=+7.294067469" watchObservedRunningTime="2026-01-24 12:15:10.42969637 +0000 UTC m=+7.294138682" Jan 24 12:15:10.436000 audit[3118]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 12:15:10.436000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff0a82e4c0 a2=0 a3=7fff0a82e4ac items=0 ppid=2961 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.436000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:10.437000 audit[3118]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 12:15:10.437000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff0a82e4c0 a2=0 a3=7fff0a82e4ac items=0 ppid=2961 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:10.437000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:10.464340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3891161099.mount: Deactivated successfully. Jan 24 12:15:11.384155 containerd[1599]: time="2026-01-24T12:15:11.384021739Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:15:11.386001 containerd[1599]: time="2026-01-24T12:15:11.385846861Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 24 12:15:11.387947 containerd[1599]: time="2026-01-24T12:15:11.387840582Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:15:11.392526 containerd[1599]: time="2026-01-24T12:15:11.392056590Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:15:11.395873 containerd[1599]: time="2026-01-24T12:15:11.395694465Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.832419191s" Jan 24 12:15:11.395873 containerd[1599]: time="2026-01-24T12:15:11.395783811Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 12:15:11.398680 containerd[1599]: time="2026-01-24T12:15:11.398633266Z" level=info msg="CreateContainer within sandbox \"57c37b6b1c5668078b3d725250d864381f186f0ed1770c26d0bd6558e8938f50\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 12:15:11.414540 containerd[1599]: time="2026-01-24T12:15:11.414200169Z" level=info msg="Container b5cbbd3663603c7ff7294dda74d7381f1b7ef004979fbdbd5f0693d9f4f95f14: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:15:11.424392 containerd[1599]: time="2026-01-24T12:15:11.424313672Z" level=info msg="CreateContainer within sandbox \"57c37b6b1c5668078b3d725250d864381f186f0ed1770c26d0bd6558e8938f50\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b5cbbd3663603c7ff7294dda74d7381f1b7ef004979fbdbd5f0693d9f4f95f14\"" Jan 24 12:15:11.425554 containerd[1599]: 
time="2026-01-24T12:15:11.425245066Z" level=info msg="StartContainer for \"b5cbbd3663603c7ff7294dda74d7381f1b7ef004979fbdbd5f0693d9f4f95f14\"" Jan 24 12:15:11.426949 containerd[1599]: time="2026-01-24T12:15:11.426889321Z" level=info msg="connecting to shim b5cbbd3663603c7ff7294dda74d7381f1b7ef004979fbdbd5f0693d9f4f95f14" address="unix:///run/containerd/s/983c53bc29fb149c3593dde326b6119be41fecb923d43afa55bbfdbb7c2f5b3a" protocol=ttrpc version=3 Jan 24 12:15:11.465530 systemd[1]: Started cri-containerd-b5cbbd3663603c7ff7294dda74d7381f1b7ef004979fbdbd5f0693d9f4f95f14.scope - libcontainer container b5cbbd3663603c7ff7294dda74d7381f1b7ef004979fbdbd5f0693d9f4f95f14. Jan 24 12:15:11.486000 audit: BPF prog-id=146 op=LOAD Jan 24 12:15:11.487000 audit: BPF prog-id=147 op=LOAD Jan 24 12:15:11.487000 audit[3123]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2915 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:11.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235636262643336363336303363376666373239346464613734643733 Jan 24 12:15:11.488000 audit: BPF prog-id=147 op=UNLOAD Jan 24 12:15:11.488000 audit[3123]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2915 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:11.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235636262643336363336303363376666373239346464613734643733 Jan 24 12:15:11.488000 audit: BPF prog-id=148 op=LOAD Jan 24 12:15:11.488000 audit[3123]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2915 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:11.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235636262643336363336303363376666373239346464613734643733 Jan 24 12:15:11.488000 audit: BPF prog-id=149 op=LOAD Jan 24 12:15:11.488000 audit[3123]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2915 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:11.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235636262643336363336303363376666373239346464613734643733 Jan 24 12:15:11.488000 audit: BPF prog-id=149 op=UNLOAD Jan 24 12:15:11.488000 audit[3123]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2915 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:11.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235636262643336363336303363376666373239346464613734643733 Jan 24 12:15:11.488000 audit: BPF prog-id=148 op=UNLOAD Jan 24 12:15:11.488000 audit[3123]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2915 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:11.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235636262643336363336303363376666373239346464613734643733 Jan 24 12:15:11.488000 audit: BPF prog-id=150 op=LOAD Jan 24 12:15:11.488000 audit[3123]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2915 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:11.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235636262643336363336303363376666373239346464613734643733 Jan 24 12:15:11.520013 containerd[1599]: time="2026-01-24T12:15:11.519792960Z" level=info msg="StartContainer for \"b5cbbd3663603c7ff7294dda74d7381f1b7ef004979fbdbd5f0693d9f4f95f14\" returns successfully" Jan 24 12:15:12.513769 kubelet[2795]: E0124 12:15:12.513677 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:12.539245 kubelet[2795]: I0124 12:15:12.538998 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-rlln9" podStartSLOduration=2.703010437 podStartE2EDuration="4.538980798s" podCreationTimestamp="2026-01-24 12:15:08 +0000 UTC" firstStartedPulling="2026-01-24 12:15:09.561004654 +0000 UTC m=+6.425446966" lastFinishedPulling="2026-01-24 12:15:11.396975015 +0000 UTC m=+8.261417327" observedRunningTime="2026-01-24 12:15:12.427244089 +0000 UTC m=+9.291686401" watchObservedRunningTime="2026-01-24 12:15:12.538980798 +0000 UTC m=+9.403423110" Jan 24 12:15:13.416776 kubelet[2795]: E0124 12:15:13.416693 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:13.996751 kubelet[2795]: E0124 12:15:13.996596 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:14.417707 kubelet[2795]: E0124 12:15:14.416374 2795 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:17.493961 sudo[1842]: pam_unix(sudo:session): session closed for user root Jan 24 12:15:17.492000 audit[1842]: USER_END pid=1842 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 12:15:17.498483 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 24 12:15:17.498552 kernel: audit: type=1106 audit(1769256917.492:506): pid=1842 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 12:15:17.493000 audit[1842]: CRED_DISP pid=1842 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 12:15:17.519817 kernel: audit: type=1104 audit(1769256917.493:507): pid=1842 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 12:15:17.520568 sshd[1841]: Connection closed by 10.0.0.1 port 37498 Jan 24 12:15:17.520912 sshd-session[1835]: pam_unix(sshd:session): session closed for user core Jan 24 12:15:17.526578 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Jan 24 12:15:17.522000 audit[1835]: USER_END pid=1835 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:17.545256 kernel: audit: type=1106 audit(1769256917.522:508): pid=1835 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:17.546343 systemd[1]: sshd@8-10.0.0.151:22-10.0.0.1:37498.service: Deactivated successfully. Jan 24 12:15:17.550039 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 12:15:17.550640 systemd[1]: session-10.scope: Consumed 5.770s CPU time, 212.5M memory peak. Jan 24 12:15:17.554747 systemd-logind[1577]: Removed session 10. Jan 24 12:15:17.522000 audit[1835]: CRED_DISP pid=1835 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:17.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.151:22-10.0.0.1:37498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:15:17.590255 kernel: audit: type=1104 audit(1769256917.522:509): pid=1835 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:17.590474 kernel: audit: type=1131 audit(1769256917.545:510): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.151:22-10.0.0.1:37498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:15:18.026000 audit[3216]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:18.039222 kernel: audit: type=1325 audit(1769256918.026:511): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:18.026000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffed559a280 a2=0 a3=7ffed559a26c items=0 ppid=2961 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:18.064357 kernel: audit: type=1300 audit(1769256918.026:511): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffed559a280 a2=0 a3=7ffed559a26c items=0 ppid=2961 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:18.026000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:18.078379 kernel: audit: type=1327 audit(1769256918.026:511): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:18.049000 audit[3216]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:18.049000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed559a280 a2=0 a3=0 items=0 ppid=2961 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:18.102373 kernel: audit: type=1325 audit(1769256918.049:512): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:18.103807 kernel: audit: type=1300 audit(1769256918.049:512): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed559a280 a2=0 a3=0 items=0 ppid=2961 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:18.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:18.082000 audit[3218]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:18.082000 audit[3218]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffca3e87580 a2=0 a3=7ffca3e8756c items=0 ppid=2961 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:18.082000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:18.109000 audit[3218]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:18.109000 audit[3218]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca3e87580 a2=0 a3=0 items=0 ppid=2961 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:18.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:20.292000 audit[3220]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:20.292000 audit[3220]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc90de40c0 a2=0 a3=7ffc90de40ac items=0 ppid=2961 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:20.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:20.307000 audit[3220]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:20.307000 audit[3220]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc90de40c0 a2=0 a3=0 items=0 ppid=2961 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:20.307000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:20.348000 audit[3222]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:20.348000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc077112f0 a2=0 a3=7ffc077112dc items=0 ppid=2961 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:20.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:20.357000 audit[3222]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:20.357000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc077112f0 a2=0 a3=0 items=0 ppid=2961 pid=3222 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:20.357000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:21.375000 audit[3224]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:21.375000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe1551d4e0 a2=0 a3=7ffe1551d4cc items=0 ppid=2961 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:21.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:21.379000 audit[3224]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:21.379000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe1551d4e0 a2=0 a3=0 items=0 ppid=2961 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:21.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:22.176943 systemd[1]: Created slice kubepods-besteffort-pod67c4694e_31cf_47db_b27d_6f66679e2b3c.slice - libcontainer container kubepods-besteffort-pod67c4694e_31cf_47db_b27d_6f66679e2b3c.slice. Jan 24 12:15:22.233059 kubelet[2795]: I0124 12:15:22.233022 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/67c4694e-31cf-47db-b27d-6f66679e2b3c-typha-certs\") pod \"calico-typha-8b64d44d5-z7nzz\" (UID: \"67c4694e-31cf-47db-b27d-6f66679e2b3c\") " pod="calico-system/calico-typha-8b64d44d5-z7nzz" Jan 24 12:15:22.234044 kubelet[2795]: I0124 12:15:22.233427 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwb9v\" (UniqueName: \"kubernetes.io/projected/67c4694e-31cf-47db-b27d-6f66679e2b3c-kube-api-access-pwb9v\") pod \"calico-typha-8b64d44d5-z7nzz\" (UID: \"67c4694e-31cf-47db-b27d-6f66679e2b3c\") " pod="calico-system/calico-typha-8b64d44d5-z7nzz" Jan 24 12:15:22.234044 kubelet[2795]: I0124 12:15:22.233999 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67c4694e-31cf-47db-b27d-6f66679e2b3c-tigera-ca-bundle\") pod \"calico-typha-8b64d44d5-z7nzz\" (UID: \"67c4694e-31cf-47db-b27d-6f66679e2b3c\") " pod="calico-system/calico-typha-8b64d44d5-z7nzz" Jan 24 12:15:22.395735 systemd[1]: Created slice kubepods-besteffort-pod35eae9ca_3ee1_4fe5_960b_84f302455698.slice - libcontainer container kubepods-besteffort-pod35eae9ca_3ee1_4fe5_960b_84f302455698.slice. 
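The audit PROCTITLE values in the records above and below are hex-encoded command lines, with NUL bytes separating the arguments. As an illustration (not part of the captured log), the short Python sketch below decodes one of the iptables-restore proctitles quoted above; the hex string is copied verbatim from the log, everything else is assumed.

    # Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
    # The sample string is copied from the iptables-restore records above.
    hex_proctitle = (
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    )
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

The same decoding applies to the earlier ip6tables records in this section, which appear to be kube-proxy (ppid 2961) creating the KUBE-* filter and nat chains.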
Jan 24 12:15:22.399000 audit[3228]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:22.399000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffe73e38a0 a2=0 a3=7fffe73e388c items=0 ppid=2961 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.399000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:22.404000 audit[3228]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:22.404000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffe73e38a0 a2=0 a3=0 items=0 ppid=2961 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.404000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:22.436009 kubelet[2795]: I0124 12:15:22.435868 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/35eae9ca-3ee1-4fe5-960b-84f302455698-var-run-calico\") pod \"calico-node-l6gtl\" (UID: \"35eae9ca-3ee1-4fe5-960b-84f302455698\") " pod="calico-system/calico-node-l6gtl" Jan 24 12:15:22.436603 kubelet[2795]: I0124 12:15:22.436251 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/35eae9ca-3ee1-4fe5-960b-84f302455698-cni-log-dir\") pod \"calico-node-l6gtl\" (UID: \"35eae9ca-3ee1-4fe5-960b-84f302455698\") " pod="calico-system/calico-node-l6gtl" Jan 24 12:15:22.436603 kubelet[2795]: I0124 12:15:22.436271 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35eae9ca-3ee1-4fe5-960b-84f302455698-lib-modules\") pod \"calico-node-l6gtl\" (UID: \"35eae9ca-3ee1-4fe5-960b-84f302455698\") " pod="calico-system/calico-node-l6gtl" Jan 24 12:15:22.436603 kubelet[2795]: I0124 12:15:22.436287 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35eae9ca-3ee1-4fe5-960b-84f302455698-tigera-ca-bundle\") pod \"calico-node-l6gtl\" (UID: \"35eae9ca-3ee1-4fe5-960b-84f302455698\") " pod="calico-system/calico-node-l6gtl" Jan 24 12:15:22.436603 kubelet[2795]: I0124 12:15:22.436307 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/35eae9ca-3ee1-4fe5-960b-84f302455698-flexvol-driver-host\") pod \"calico-node-l6gtl\" (UID: \"35eae9ca-3ee1-4fe5-960b-84f302455698\") " pod="calico-system/calico-node-l6gtl" Jan 24 12:15:22.436603 kubelet[2795]: I0124 12:15:22.436338 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/35eae9ca-3ee1-4fe5-960b-84f302455698-policysync\") pod \"calico-node-l6gtl\" (UID: \"35eae9ca-3ee1-4fe5-960b-84f302455698\") " pod="calico-system/calico-node-l6gtl" Jan 24 12:15:22.437213 kubelet[2795]: I0124 12:15:22.436362 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/35eae9ca-3ee1-4fe5-960b-84f302455698-cni-bin-dir\") pod \"calico-node-l6gtl\" (UID: \"35eae9ca-3ee1-4fe5-960b-84f302455698\") " pod="calico-system/calico-node-l6gtl" Jan 24 12:15:22.437213 kubelet[2795]: I0124 12:15:22.436376 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/35eae9ca-3ee1-4fe5-960b-84f302455698-cni-net-dir\") pod \"calico-node-l6gtl\" (UID: \"35eae9ca-3ee1-4fe5-960b-84f302455698\") " pod="calico-system/calico-node-l6gtl" Jan 24 12:15:22.437213 kubelet[2795]: I0124 12:15:22.436392 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/35eae9ca-3ee1-4fe5-960b-84f302455698-node-certs\") pod \"calico-node-l6gtl\" (UID: \"35eae9ca-3ee1-4fe5-960b-84f302455698\") " pod="calico-system/calico-node-l6gtl" Jan 24 12:15:22.437213 kubelet[2795]: I0124 12:15:22.436407 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhxlr\" (UniqueName: \"kubernetes.io/projected/35eae9ca-3ee1-4fe5-960b-84f302455698-kube-api-access-jhxlr\") pod \"calico-node-l6gtl\" (UID: \"35eae9ca-3ee1-4fe5-960b-84f302455698\") " pod="calico-system/calico-node-l6gtl" Jan 24 12:15:22.437213 kubelet[2795]: I0124 12:15:22.436420 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/35eae9ca-3ee1-4fe5-960b-84f302455698-var-lib-calico\") pod \"calico-node-l6gtl\" (UID: \"35eae9ca-3ee1-4fe5-960b-84f302455698\") " pod="calico-system/calico-node-l6gtl" Jan 24 12:15:22.437310 kubelet[2795]: I0124 12:15:22.436432 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35eae9ca-3ee1-4fe5-960b-84f302455698-xtables-lock\") pod \"calico-node-l6gtl\" (UID: \"35eae9ca-3ee1-4fe5-960b-84f302455698\") " pod="calico-system/calico-node-l6gtl" Jan 24 12:15:22.485968 kubelet[2795]: E0124 12:15:22.485703 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:22.486425 containerd[1599]: time="2026-01-24T12:15:22.486356346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8b64d44d5-z7nzz,Uid:67c4694e-31cf-47db-b27d-6f66679e2b3c,Namespace:calico-system,Attempt:0,}" Jan 24 12:15:22.536062 containerd[1599]: time="2026-01-24T12:15:22.535729117Z" level=info msg="connecting to shim 154e98d4db0793feb37356bab71755f2f67b08d7eb3d56523e7f066dda895da1" address="unix:///run/containerd/s/06d32690ea41d48d445a11a7a943cd6d2c2c8d0a25132334f3e1a436b2064c60" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:15:22.563370 kubelet[2795]: E0124 12:15:22.561894 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.563370 kubelet[2795]: W0124 12:15:22.561918 2795 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.563370 kubelet[2795]: E0124 12:15:22.562520 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.581017 kubelet[2795]: E0124 12:15:22.580638 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.581017 kubelet[2795]: W0124 12:15:22.580661 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.581017 kubelet[2795]: E0124 12:15:22.580682 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.583155 kubelet[2795]: E0124 12:15:22.582408 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vfhsw" podUID="d101ff7c-9560-44ae-a339-4a5dc1053aeb" Jan 24 12:15:22.628540 kubelet[2795]: E0124 12:15:22.628288 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.628540 kubelet[2795]: W0124 12:15:22.628309 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.628540 kubelet[2795]: E0124 12:15:22.628326 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.629525 kubelet[2795]: E0124 12:15:22.629199 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.629525 kubelet[2795]: W0124 12:15:22.629213 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.629525 kubelet[2795]: E0124 12:15:22.629223 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.629525 kubelet[2795]: E0124 12:15:22.629513 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.629525 kubelet[2795]: W0124 12:15:22.629522 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.629525 kubelet[2795]: E0124 12:15:22.629531 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:22.632208 kubelet[2795]: E0124 12:15:22.630868 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.632208 kubelet[2795]: W0124 12:15:22.630880 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.632208 kubelet[2795]: E0124 12:15:22.630889 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.632208 kubelet[2795]: E0124 12:15:22.631429 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.632208 kubelet[2795]: W0124 12:15:22.631496 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.632208 kubelet[2795]: E0124 12:15:22.631507 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.633778 kubelet[2795]: E0124 12:15:22.632867 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.633778 kubelet[2795]: W0124 12:15:22.632880 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.633778 kubelet[2795]: E0124 12:15:22.632889 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.634263 kubelet[2795]: E0124 12:15:22.633946 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.634263 kubelet[2795]: W0124 12:15:22.633958 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.634263 kubelet[2795]: E0124 12:15:22.633968 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.634814 systemd[1]: Started cri-containerd-154e98d4db0793feb37356bab71755f2f67b08d7eb3d56523e7f066dda895da1.scope - libcontainer container 154e98d4db0793feb37356bab71755f2f67b08d7eb3d56523e7f066dda895da1. 
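The repeated kubelet "driver call failed" / "unexpected end of JSON input" errors surrounding this point come from FlexVolume plugin probing: the kubelet runs the driver binary with an init argument and parses its stdout as JSON, so a driver that is not installed yet yields empty output and an unmarshal error, exactly as logged here. The sketch below is a rough illustration of that probe, not kubelet source; only the driver path is taken from the log, the rest is assumed.

    # Rough illustration (not kubelet code) of the FlexVolume probe behind the
    # "driver call failed" / "unexpected end of JSON input" messages in this log.
    # Only the driver path is copied from the log; everything else is assumed.
    import json
    import subprocess

    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    def probe(driver: str) -> dict:
        try:
            out = subprocess.run([driver, "init"], capture_output=True, text=True).stdout
        except FileNotFoundError:
            out = ""  # matches the log: executable not found in $PATH, output: ""
        return json.loads(out)  # empty output cannot be parsed as JSON

    try:
        probe(DRIVER)
    except json.JSONDecodeError as err:
        print(f"FlexVolume probe failed: {err}")

The flexvol-driver-host host-path volume in the calico-node-l6gtl pod spec above suggests the missing driver is installed by calico-node once it starts, after which these probe failures would stop.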
Jan 24 12:15:22.635519 kubelet[2795]: E0124 12:15:22.635305 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.635519 kubelet[2795]: W0124 12:15:22.635319 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.635519 kubelet[2795]: E0124 12:15:22.635333 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.637532 kubelet[2795]: E0124 12:15:22.636274 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.637532 kubelet[2795]: W0124 12:15:22.636285 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.637532 kubelet[2795]: E0124 12:15:22.636294 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.637532 kubelet[2795]: E0124 12:15:22.636546 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.637532 kubelet[2795]: W0124 12:15:22.636560 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.637532 kubelet[2795]: E0124 12:15:22.636575 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.637532 kubelet[2795]: E0124 12:15:22.637000 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.637532 kubelet[2795]: W0124 12:15:22.637008 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.637532 kubelet[2795]: E0124 12:15:22.637016 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.638335 kubelet[2795]: E0124 12:15:22.638070 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.638758 kubelet[2795]: W0124 12:15:22.638543 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.639023 kubelet[2795]: E0124 12:15:22.638936 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:22.641253 kubelet[2795]: E0124 12:15:22.640223 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.641253 kubelet[2795]: W0124 12:15:22.640242 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.641253 kubelet[2795]: E0124 12:15:22.640253 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.641798 kubelet[2795]: E0124 12:15:22.641644 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.641798 kubelet[2795]: W0124 12:15:22.641697 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.641798 kubelet[2795]: E0124 12:15:22.641708 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.643266 kubelet[2795]: E0124 12:15:22.642966 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.643266 kubelet[2795]: W0124 12:15:22.642984 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.643266 kubelet[2795]: E0124 12:15:22.642999 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.643729 kubelet[2795]: E0124 12:15:22.643714 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.643818 kubelet[2795]: W0124 12:15:22.643803 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.643867 kubelet[2795]: E0124 12:15:22.643857 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.644714 kubelet[2795]: E0124 12:15:22.644696 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.644798 kubelet[2795]: W0124 12:15:22.644784 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.644863 kubelet[2795]: E0124 12:15:22.644850 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:22.645700 kubelet[2795]: E0124 12:15:22.645683 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.646622 kubelet[2795]: W0124 12:15:22.646602 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.646701 kubelet[2795]: E0124 12:15:22.646685 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.649941 kubelet[2795]: E0124 12:15:22.649924 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.650024 kubelet[2795]: W0124 12:15:22.650010 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.650578 kubelet[2795]: E0124 12:15:22.650391 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.650977 kubelet[2795]: E0124 12:15:22.650963 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.651065 kubelet[2795]: W0124 12:15:22.651050 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.651865 kubelet[2795]: E0124 12:15:22.651264 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.652031 kubelet[2795]: E0124 12:15:22.652017 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.652336 kubelet[2795]: W0124 12:15:22.652318 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.652426 kubelet[2795]: E0124 12:15:22.652409 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:22.652650 kubelet[2795]: I0124 12:15:22.652626 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d101ff7c-9560-44ae-a339-4a5dc1053aeb-varrun\") pod \"csi-node-driver-vfhsw\" (UID: \"d101ff7c-9560-44ae-a339-4a5dc1053aeb\") " pod="calico-system/csi-node-driver-vfhsw" Jan 24 12:15:22.653649 kubelet[2795]: E0124 12:15:22.653540 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.653649 kubelet[2795]: W0124 12:15:22.653553 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.653649 kubelet[2795]: E0124 12:15:22.653568 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.653649 kubelet[2795]: I0124 12:15:22.653582 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d101ff7c-9560-44ae-a339-4a5dc1053aeb-kubelet-dir\") pod \"csi-node-driver-vfhsw\" (UID: \"d101ff7c-9560-44ae-a339-4a5dc1053aeb\") " pod="calico-system/csi-node-driver-vfhsw" Jan 24 12:15:22.655357 kubelet[2795]: E0124 12:15:22.654045 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.655357 kubelet[2795]: W0124 12:15:22.654203 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.655357 kubelet[2795]: E0124 12:15:22.654830 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.655357 kubelet[2795]: W0124 12:15:22.654840 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.655357 kubelet[2795]: E0124 12:15:22.654858 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.656357 kubelet[2795]: E0124 12:15:22.656253 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.656357 kubelet[2795]: W0124 12:15:22.656270 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.656357 kubelet[2795]: E0124 12:15:22.656285 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.656999 kubelet[2795]: E0124 12:15:22.656628 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:22.656999 kubelet[2795]: I0124 12:15:22.656661 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d101ff7c-9560-44ae-a339-4a5dc1053aeb-socket-dir\") pod \"csi-node-driver-vfhsw\" (UID: \"d101ff7c-9560-44ae-a339-4a5dc1053aeb\") " pod="calico-system/csi-node-driver-vfhsw" Jan 24 12:15:22.657995 kubelet[2795]: E0124 12:15:22.657864 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.657995 kubelet[2795]: W0124 12:15:22.657884 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.657995 kubelet[2795]: E0124 12:15:22.657899 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.659068 kubelet[2795]: E0124 12:15:22.658845 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.659068 kubelet[2795]: W0124 12:15:22.658865 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.659068 kubelet[2795]: E0124 12:15:22.659026 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.659707 kubelet[2795]: E0124 12:15:22.659402 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.659707 kubelet[2795]: W0124 12:15:22.659495 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.659707 kubelet[2795]: E0124 12:15:22.659596 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.659707 kubelet[2795]: I0124 12:15:22.659632 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brds4\" (UniqueName: \"kubernetes.io/projected/d101ff7c-9560-44ae-a339-4a5dc1053aeb-kube-api-access-brds4\") pod \"csi-node-driver-vfhsw\" (UID: \"d101ff7c-9560-44ae-a339-4a5dc1053aeb\") " pod="calico-system/csi-node-driver-vfhsw" Jan 24 12:15:22.660257 kubelet[2795]: E0124 12:15:22.660196 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.660257 kubelet[2795]: W0124 12:15:22.660211 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.660350 kubelet[2795]: E0124 12:15:22.660285 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:22.661568 kubelet[2795]: E0124 12:15:22.661389 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.661690 kubelet[2795]: W0124 12:15:22.661588 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.661690 kubelet[2795]: E0124 12:15:22.661604 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.663205 kubelet[2795]: E0124 12:15:22.662752 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.663205 kubelet[2795]: W0124 12:15:22.662763 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.663205 kubelet[2795]: E0124 12:15:22.662832 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.663310 kubelet[2795]: I0124 12:15:22.663244 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d101ff7c-9560-44ae-a339-4a5dc1053aeb-registration-dir\") pod \"csi-node-driver-vfhsw\" (UID: \"d101ff7c-9560-44ae-a339-4a5dc1053aeb\") " pod="calico-system/csi-node-driver-vfhsw" Jan 24 12:15:22.664049 kubelet[2795]: E0124 12:15:22.663967 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.664774 kubelet[2795]: W0124 12:15:22.664240 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.664774 kubelet[2795]: E0124 12:15:22.664265 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.665609 kubelet[2795]: E0124 12:15:22.665345 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.665609 kubelet[2795]: W0124 12:15:22.665358 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.665609 kubelet[2795]: E0124 12:15:22.665425 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:22.666507 kubelet[2795]: E0124 12:15:22.666229 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.666695 kubelet[2795]: W0124 12:15:22.666556 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.667203 kubelet[2795]: E0124 12:15:22.666756 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.668298 kubelet[2795]: E0124 12:15:22.668208 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.668298 kubelet[2795]: W0124 12:15:22.668265 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.668298 kubelet[2795]: E0124 12:15:22.668277 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.680000 audit: BPF prog-id=151 op=LOAD Jan 24 12:15:22.685777 kernel: kauditd_printk_skb: 31 callbacks suppressed Jan 24 12:15:22.685844 kernel: audit: type=1334 audit(1769256922.680:523): prog-id=151 op=LOAD Jan 24 12:15:22.697822 kernel: audit: type=1334 audit(1769256922.682:524): prog-id=152 op=LOAD Jan 24 12:15:22.682000 audit: BPF prog-id=152 op=LOAD Jan 24 12:15:22.729781 kernel: audit: type=1300 audit(1769256922.682:524): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3238 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.682000 audit[3254]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3238 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.730223 containerd[1599]: time="2026-01-24T12:15:22.710213647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l6gtl,Uid:35eae9ca-3ee1-4fe5-960b-84f302455698,Namespace:calico-system,Attempt:0,}" Jan 24 12:15:22.730298 kubelet[2795]: E0124 12:15:22.703871 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:22.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346539386434646230373933666562333733353662616237313735 Jan 24 12:15:22.756192 kernel: audit: type=1327 audit(1769256922.682:524): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346539386434646230373933666562333733353662616237313735 Jan 24 12:15:22.682000 audit: BPF prog-id=152 op=UNLOAD Jan 24 12:15:22.682000 audit[3254]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3238 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.776504 containerd[1599]: time="2026-01-24T12:15:22.775716378Z" level=info msg="connecting to shim 8be0c2dda12c362a4c376e81685f7daa86b577b4f777c1c01bc19c7b23dc5722" address="unix:///run/containerd/s/c0b12b4ae46ca30503fe47ca177b7978cf88e43763a548ee17838bf12503ea93" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:15:22.779387 kernel: audit: type=1334 audit(1769256922.682:525): prog-id=152 op=UNLOAD Jan 24 12:15:22.779521 kernel: audit: type=1300 audit(1769256922.682:525): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3238 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346539386434646230373933666562333733353662616237313735 Jan 24 12:15:22.786287 kubelet[2795]: E0124 12:15:22.782049 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.786287 kubelet[2795]: W0124 12:15:22.782071 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.786287 kubelet[2795]: E0124 12:15:22.782646 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.786287 kubelet[2795]: E0124 12:15:22.784397 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.786287 kubelet[2795]: W0124 12:15:22.784407 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.786287 kubelet[2795]: E0124 12:15:22.784422 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:22.786287 kubelet[2795]: E0124 12:15:22.784964 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.786287 kubelet[2795]: W0124 12:15:22.784983 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.786287 kubelet[2795]: E0124 12:15:22.785058 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.786287 kubelet[2795]: E0124 12:15:22.785384 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.786653 kubelet[2795]: W0124 12:15:22.785392 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.786653 kubelet[2795]: E0124 12:15:22.785693 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.787015 kubelet[2795]: E0124 12:15:22.786757 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.787015 kubelet[2795]: W0124 12:15:22.786774 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.787554 kubelet[2795]: E0124 12:15:22.787309 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.787554 kubelet[2795]: W0124 12:15:22.787320 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.787815 kubelet[2795]: E0124 12:15:22.787708 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.788023 kubelet[2795]: E0124 12:15:22.787955 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.788328 kubelet[2795]: E0124 12:15:22.788241 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.788328 kubelet[2795]: W0124 12:15:22.788252 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.788328 kubelet[2795]: E0124 12:15:22.788262 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:22.788784 kubelet[2795]: E0124 12:15:22.788755 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.788784 kubelet[2795]: W0124 12:15:22.788768 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.789278 kubelet[2795]: E0124 12:15:22.789247 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.789383 kubelet[2795]: E0124 12:15:22.789373 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.789535 kubelet[2795]: W0124 12:15:22.789422 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.789669 kubelet[2795]: E0124 12:15:22.789657 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.790701 kubelet[2795]: E0124 12:15:22.790589 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.791036 kubelet[2795]: W0124 12:15:22.791014 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.791402 kubelet[2795]: E0124 12:15:22.791189 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.791704 kubelet[2795]: E0124 12:15:22.791628 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.791704 kubelet[2795]: W0124 12:15:22.791639 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.792218 kubelet[2795]: E0124 12:15:22.792019 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.792347 kubelet[2795]: E0124 12:15:22.792337 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.792396 kubelet[2795]: W0124 12:15:22.792386 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.793020 kubelet[2795]: E0124 12:15:22.792992 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:22.793899 kubelet[2795]: E0124 12:15:22.793728 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.794232 kubelet[2795]: W0124 12:15:22.794031 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.794668 kubelet[2795]: E0124 12:15:22.794599 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.795198 kubelet[2795]: E0124 12:15:22.795185 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.795370 kubelet[2795]: W0124 12:15:22.795357 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.795818 kubelet[2795]: E0124 12:15:22.795640 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.796228 kubelet[2795]: E0124 12:15:22.796214 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.796292 kubelet[2795]: W0124 12:15:22.796281 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.796629 kubelet[2795]: E0124 12:15:22.796419 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.797814 kubelet[2795]: E0124 12:15:22.797778 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.797988 kubelet[2795]: W0124 12:15:22.797947 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.798075 kubelet[2795]: E0124 12:15:22.798058 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:22.798233 kernel: audit: type=1327 audit(1769256922.682:525): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346539386434646230373933666562333733353662616237313735 Jan 24 12:15:22.682000 audit: BPF prog-id=153 op=LOAD Jan 24 12:15:22.801327 kubelet[2795]: E0124 12:15:22.798693 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.801327 kubelet[2795]: W0124 12:15:22.798702 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.801327 kubelet[2795]: E0124 12:15:22.798755 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.801327 kubelet[2795]: E0124 12:15:22.798919 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.801327 kubelet[2795]: W0124 12:15:22.798926 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.801327 kubelet[2795]: E0124 12:15:22.799185 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.801327 kubelet[2795]: E0124 12:15:22.799375 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.801327 kubelet[2795]: W0124 12:15:22.799382 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.801327 kubelet[2795]: E0124 12:15:22.799564 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.801327 kubelet[2795]: E0124 12:15:22.799746 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.801698 kubelet[2795]: W0124 12:15:22.799753 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.801698 kubelet[2795]: E0124 12:15:22.799889 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:22.801698 kubelet[2795]: E0124 12:15:22.800542 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.801698 kubelet[2795]: W0124 12:15:22.800551 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.801698 kubelet[2795]: E0124 12:15:22.800563 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.801698 kubelet[2795]: E0124 12:15:22.801253 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.801698 kubelet[2795]: W0124 12:15:22.801266 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.801698 kubelet[2795]: E0124 12:15:22.801427 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.682000 audit[3254]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3238 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.804341 kubelet[2795]: E0124 12:15:22.803491 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.804341 kubelet[2795]: W0124 12:15:22.803501 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.804341 kubelet[2795]: E0124 12:15:22.803568 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.804341 kubelet[2795]: E0124 12:15:22.803754 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.804341 kubelet[2795]: W0124 12:15:22.803762 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.804341 kubelet[2795]: E0124 12:15:22.803963 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:22.804729 kubelet[2795]: E0124 12:15:22.804633 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.804729 kubelet[2795]: W0124 12:15:22.804685 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.804729 kubelet[2795]: E0124 12:15:22.804695 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:22.820267 kernel: audit: type=1334 audit(1769256922.682:526): prog-id=153 op=LOAD Jan 24 12:15:22.820335 kernel: audit: type=1300 audit(1769256922.682:526): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3238 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346539386434646230373933666562333733353662616237313735 Jan 24 12:15:22.830717 kubelet[2795]: E0124 12:15:22.822433 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:22.830772 containerd[1599]: time="2026-01-24T12:15:22.820810378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8b64d44d5-z7nzz,Uid:67c4694e-31cf-47db-b27d-6f66679e2b3c,Namespace:calico-system,Attempt:0,} returns sandbox id \"154e98d4db0793feb37356bab71755f2f67b08d7eb3d56523e7f066dda895da1\"" Jan 24 12:15:22.830772 containerd[1599]: time="2026-01-24T12:15:22.824195732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 12:15:22.837492 kernel: audit: type=1327 audit(1769256922.682:526): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346539386434646230373933666562333733353662616237313735 Jan 24 12:15:22.682000 audit: BPF prog-id=154 op=LOAD Jan 24 12:15:22.682000 audit[3254]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3238 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346539386434646230373933666562333733353662616237313735 Jan 24 12:15:22.682000 audit: BPF prog-id=154 op=UNLOAD Jan 24 12:15:22.682000 audit[3254]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3238 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346539386434646230373933666562333733353662616237313735 Jan 24 12:15:22.682000 audit: BPF prog-id=153 op=UNLOAD Jan 24 12:15:22.682000 audit[3254]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3238 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346539386434646230373933666562333733353662616237313735 Jan 24 12:15:22.682000 audit: BPF prog-id=155 op=LOAD Jan 24 12:15:22.682000 audit[3254]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3238 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346539386434646230373933666562333733353662616237313735 Jan 24 12:15:22.842848 systemd[1]: Started cri-containerd-8be0c2dda12c362a4c376e81685f7daa86b577b4f777c1c01bc19c7b23dc5722.scope - libcontainer container 8be0c2dda12c362a4c376e81685f7daa86b577b4f777c1c01bc19c7b23dc5722. Jan 24 12:15:22.857712 kubelet[2795]: E0124 12:15:22.857534 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:22.857712 kubelet[2795]: W0124 12:15:22.857564 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:22.857712 kubelet[2795]: E0124 12:15:22.857587 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:22.881000 audit: BPF prog-id=156 op=LOAD Jan 24 12:15:22.882000 audit: BPF prog-id=157 op=LOAD Jan 24 12:15:22.882000 audit[3339]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3326 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862653063326464613132633336326134633337366538313638356637 Jan 24 12:15:22.882000 audit: BPF prog-id=157 op=UNLOAD Jan 24 12:15:22.882000 audit[3339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862653063326464613132633336326134633337366538313638356637 Jan 24 12:15:22.882000 audit: BPF prog-id=158 op=LOAD Jan 24 12:15:22.882000 audit[3339]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3326 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862653063326464613132633336326134633337366538313638356637 Jan 24 12:15:22.883000 audit: BPF prog-id=159 op=LOAD Jan 24 12:15:22.883000 audit[3339]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3326 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862653063326464613132633336326134633337366538313638356637 Jan 24 12:15:22.884000 audit: BPF prog-id=159 op=UNLOAD Jan 24 12:15:22.884000 audit[3339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862653063326464613132633336326134633337366538313638356637 Jan 24 12:15:22.884000 audit: BPF prog-id=158 op=UNLOAD Jan 24 
12:15:22.884000 audit[3339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862653063326464613132633336326134633337366538313638356637 Jan 24 12:15:22.884000 audit: BPF prog-id=160 op=LOAD Jan 24 12:15:22.884000 audit[3339]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3326 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:22.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862653063326464613132633336326134633337366538313638356637 Jan 24 12:15:22.939032 containerd[1599]: time="2026-01-24T12:15:22.938921304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l6gtl,Uid:35eae9ca-3ee1-4fe5-960b-84f302455698,Namespace:calico-system,Attempt:0,} returns sandbox id \"8be0c2dda12c362a4c376e81685f7daa86b577b4f777c1c01bc19c7b23dc5722\"" Jan 24 12:15:22.940323 kubelet[2795]: E0124 12:15:22.940237 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:23.391653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2518536824.mount: Deactivated successfully. 
Jan 24 12:15:24.283012 containerd[1599]: time="2026-01-24T12:15:24.282900656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:15:24.286734 containerd[1599]: time="2026-01-24T12:15:24.286651555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:24.289053 containerd[1599]: time="2026-01-24T12:15:24.288849154Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:15:24.292951 containerd[1599]: time="2026-01-24T12:15:24.292785498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:15:24.293386 containerd[1599]: time="2026-01-24T12:15:24.293328645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.469101354s" Jan 24 12:15:24.293386 containerd[1599]: time="2026-01-24T12:15:24.293359002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 12:15:24.294989 containerd[1599]: time="2026-01-24T12:15:24.294955781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 12:15:24.310778 containerd[1599]: time="2026-01-24T12:15:24.310542780Z" level=info msg="CreateContainer within sandbox \"154e98d4db0793feb37356bab71755f2f67b08d7eb3d56523e7f066dda895da1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 12:15:24.320729 kubelet[2795]: E0124 12:15:24.320698 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vfhsw" podUID="d101ff7c-9560-44ae-a339-4a5dc1053aeb" Jan 24 12:15:24.323962 containerd[1599]: time="2026-01-24T12:15:24.323916866Z" level=info msg="Container bcd330ab350f21eb38f5f8d11ce30407ca89ab87e2ac182b373135408c1f1ce5: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:15:24.338996 containerd[1599]: time="2026-01-24T12:15:24.338787799Z" level=info msg="CreateContainer within sandbox \"154e98d4db0793feb37356bab71755f2f67b08d7eb3d56523e7f066dda895da1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bcd330ab350f21eb38f5f8d11ce30407ca89ab87e2ac182b373135408c1f1ce5\"" Jan 24 12:15:24.340020 containerd[1599]: time="2026-01-24T12:15:24.339966512Z" level=info msg="StartContainer for \"bcd330ab350f21eb38f5f8d11ce30407ca89ab87e2ac182b373135408c1f1ce5\"" Jan 24 12:15:24.341632 containerd[1599]: time="2026-01-24T12:15:24.341558613Z" level=info msg="connecting to shim bcd330ab350f21eb38f5f8d11ce30407ca89ab87e2ac182b373135408c1f1ce5" address="unix:///run/containerd/s/06d32690ea41d48d445a11a7a943cd6d2c2c8d0a25132334f3e1a436b2064c60" protocol=ttrpc version=3 Jan 24 12:15:24.385605 systemd[1]: Started 
cri-containerd-bcd330ab350f21eb38f5f8d11ce30407ca89ab87e2ac182b373135408c1f1ce5.scope - libcontainer container bcd330ab350f21eb38f5f8d11ce30407ca89ab87e2ac182b373135408c1f1ce5. Jan 24 12:15:24.410000 audit: BPF prog-id=161 op=LOAD Jan 24 12:15:24.411000 audit: BPF prog-id=162 op=LOAD Jan 24 12:15:24.411000 audit[3405]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3238 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:24.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263643333306162333530663231656233386635663864313163653330 Jan 24 12:15:24.411000 audit: BPF prog-id=162 op=UNLOAD Jan 24 12:15:24.411000 audit[3405]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3238 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:24.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263643333306162333530663231656233386635663864313163653330 Jan 24 12:15:24.411000 audit: BPF prog-id=163 op=LOAD Jan 24 12:15:24.411000 audit[3405]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3238 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:24.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263643333306162333530663231656233386635663864313163653330 Jan 24 12:15:24.412000 audit: BPF prog-id=164 op=LOAD Jan 24 12:15:24.412000 audit[3405]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3238 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:24.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263643333306162333530663231656233386635663864313163653330 Jan 24 12:15:24.412000 audit: BPF prog-id=164 op=UNLOAD Jan 24 12:15:24.412000 audit[3405]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3238 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:24.412000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263643333306162333530663231656233386635663864313163653330 Jan 24 12:15:24.412000 audit: BPF prog-id=163 op=UNLOAD Jan 24 12:15:24.412000 audit[3405]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3238 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:24.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263643333306162333530663231656233386635663864313163653330 Jan 24 12:15:24.412000 audit: BPF prog-id=165 op=LOAD Jan 24 12:15:24.412000 audit[3405]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3238 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:24.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263643333306162333530663231656233386635663864313163653330 Jan 24 12:15:24.485571 containerd[1599]: time="2026-01-24T12:15:24.485437923Z" level=info msg="StartContainer for \"bcd330ab350f21eb38f5f8d11ce30407ca89ab87e2ac182b373135408c1f1ce5\" returns successfully" Jan 24 12:15:25.476797 kubelet[2795]: E0124 12:15:25.476760 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:25.575868 kubelet[2795]: E0124 12:15:25.575306 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.575868 kubelet[2795]: W0124 12:15:25.575334 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.575868 kubelet[2795]: E0124 12:15:25.575359 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.576628 kubelet[2795]: E0124 12:15:25.576562 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.576628 kubelet[2795]: W0124 12:15:25.576611 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.576628 kubelet[2795]: E0124 12:15:25.576623 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:25.577007 kubelet[2795]: E0124 12:15:25.576933 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.577007 kubelet[2795]: W0124 12:15:25.576996 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.577076 kubelet[2795]: E0124 12:15:25.577009 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.577946 kubelet[2795]: E0124 12:15:25.577849 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.577946 kubelet[2795]: W0124 12:15:25.577913 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.577946 kubelet[2795]: E0124 12:15:25.577927 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.579007 kubelet[2795]: E0124 12:15:25.578913 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.579007 kubelet[2795]: W0124 12:15:25.578979 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.579007 kubelet[2795]: E0124 12:15:25.578992 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.579562 kubelet[2795]: E0124 12:15:25.579549 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.579562 kubelet[2795]: W0124 12:15:25.579557 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.579922 kubelet[2795]: E0124 12:15:25.579566 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.580053 kubelet[2795]: E0124 12:15:25.579961 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.580053 kubelet[2795]: W0124 12:15:25.579971 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.580053 kubelet[2795]: E0124 12:15:25.579979 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:25.580660 kubelet[2795]: E0124 12:15:25.580597 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.580660 kubelet[2795]: W0124 12:15:25.580608 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.580660 kubelet[2795]: E0124 12:15:25.580621 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.581569 kubelet[2795]: E0124 12:15:25.581036 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.581569 kubelet[2795]: W0124 12:15:25.581051 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.581569 kubelet[2795]: E0124 12:15:25.581061 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.581844 kubelet[2795]: E0124 12:15:25.581729 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.581901 kubelet[2795]: W0124 12:15:25.581887 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.581901 kubelet[2795]: E0124 12:15:25.581899 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.582718 kubelet[2795]: E0124 12:15:25.582289 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.582718 kubelet[2795]: W0124 12:15:25.582352 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.582718 kubelet[2795]: E0124 12:15:25.582365 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.582847 kubelet[2795]: E0124 12:15:25.582747 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.582847 kubelet[2795]: W0124 12:15:25.582756 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.582847 kubelet[2795]: E0124 12:15:25.582764 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:25.583282 kubelet[2795]: E0124 12:15:25.582941 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.583282 kubelet[2795]: W0124 12:15:25.582948 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.583282 kubelet[2795]: E0124 12:15:25.582955 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.583282 kubelet[2795]: E0124 12:15:25.583235 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.583282 kubelet[2795]: W0124 12:15:25.583244 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.583282 kubelet[2795]: E0124 12:15:25.583252 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.584680 kubelet[2795]: E0124 12:15:25.584383 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.584680 kubelet[2795]: W0124 12:15:25.584438 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.584680 kubelet[2795]: E0124 12:15:25.584517 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.611427 kubelet[2795]: E0124 12:15:25.611278 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.611427 kubelet[2795]: W0124 12:15:25.611291 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.611427 kubelet[2795]: E0124 12:15:25.611303 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.611647 kubelet[2795]: E0124 12:15:25.611595 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.611647 kubelet[2795]: W0124 12:15:25.611605 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.611647 kubelet[2795]: E0124 12:15:25.611613 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:25.612234 kubelet[2795]: E0124 12:15:25.611954 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.612234 kubelet[2795]: W0124 12:15:25.612002 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.612234 kubelet[2795]: E0124 12:15:25.612013 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.613519 kubelet[2795]: E0124 12:15:25.613275 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.613519 kubelet[2795]: W0124 12:15:25.613286 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.613519 kubelet[2795]: E0124 12:15:25.613343 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.614040 kubelet[2795]: E0124 12:15:25.613771 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.614040 kubelet[2795]: W0124 12:15:25.613781 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.614040 kubelet[2795]: E0124 12:15:25.613800 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.614295 kubelet[2795]: E0124 12:15:25.614253 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.614295 kubelet[2795]: W0124 12:15:25.614262 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.614533 kubelet[2795]: E0124 12:15:25.614419 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.615038 kubelet[2795]: E0124 12:15:25.614979 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.615184 kubelet[2795]: W0124 12:15:25.615042 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.615427 kubelet[2795]: E0124 12:15:25.615353 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:25.616443 kubelet[2795]: E0124 12:15:25.615757 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.616443 kubelet[2795]: W0124 12:15:25.615820 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.616443 kubelet[2795]: E0124 12:15:25.616036 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.616443 kubelet[2795]: E0124 12:15:25.616312 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.616443 kubelet[2795]: W0124 12:15:25.616322 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.616892 kubelet[2795]: E0124 12:15:25.616687 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.616933 kubelet[2795]: E0124 12:15:25.616901 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.616933 kubelet[2795]: W0124 12:15:25.616912 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.616933 kubelet[2795]: E0124 12:15:25.616927 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.618024 kubelet[2795]: E0124 12:15:25.617910 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.618500 kubelet[2795]: W0124 12:15:25.618425 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.618862 kubelet[2795]: E0124 12:15:25.618792 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.619652 kubelet[2795]: E0124 12:15:25.619590 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.619652 kubelet[2795]: W0124 12:15:25.619631 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.619981 kubelet[2795]: E0124 12:15:25.619925 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:25.620280 kubelet[2795]: E0124 12:15:25.620072 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.620280 kubelet[2795]: W0124 12:15:25.620192 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.620763 kubelet[2795]: E0124 12:15:25.620353 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.620763 kubelet[2795]: E0124 12:15:25.620537 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.620763 kubelet[2795]: W0124 12:15:25.620545 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.620763 kubelet[2795]: E0124 12:15:25.620607 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.621076 kubelet[2795]: E0124 12:15:25.620938 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.621076 kubelet[2795]: W0124 12:15:25.620948 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.621772 kubelet[2795]: E0124 12:15:25.621392 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.621772 kubelet[2795]: W0124 12:15:25.621400 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.621772 kubelet[2795]: E0124 12:15:25.621408 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.621772 kubelet[2795]: E0124 12:15:25.621682 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.621772 kubelet[2795]: E0124 12:15:25.621757 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.621772 kubelet[2795]: W0124 12:15:25.621767 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.622006 kubelet[2795]: E0124 12:15:25.621779 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:15:25.622384 kubelet[2795]: E0124 12:15:25.622330 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:15:25.622414 kubelet[2795]: W0124 12:15:25.622388 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:15:25.622414 kubelet[2795]: E0124 12:15:25.622401 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:15:25.877094 containerd[1599]: time="2026-01-24T12:15:25.876959715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:15:25.878678 containerd[1599]: time="2026-01-24T12:15:25.878492946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:25.880134 containerd[1599]: time="2026-01-24T12:15:25.880016518Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:15:25.882786 containerd[1599]: time="2026-01-24T12:15:25.882662054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:15:25.883360 containerd[1599]: time="2026-01-24T12:15:25.883283594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.588296775s" Jan 24 12:15:25.883360 containerd[1599]: time="2026-01-24T12:15:25.883351941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 12:15:25.885834 containerd[1599]: time="2026-01-24T12:15:25.885780482Z" level=info msg="CreateContainer within sandbox \"8be0c2dda12c362a4c376e81685f7daa86b577b4f777c1c01bc19c7b23dc5722\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 12:15:25.899054 containerd[1599]: time="2026-01-24T12:15:25.898243394Z" level=info msg="Container f0232540122299dc64a03134c9f128359c0d654e3bebeec857fcabb4b0f6d9ec: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:15:25.907804 containerd[1599]: time="2026-01-24T12:15:25.907728030Z" level=info msg="CreateContainer within sandbox \"8be0c2dda12c362a4c376e81685f7daa86b577b4f777c1c01bc19c7b23dc5722\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f0232540122299dc64a03134c9f128359c0d654e3bebeec857fcabb4b0f6d9ec\"" Jan 24 12:15:25.908682 containerd[1599]: time="2026-01-24T12:15:25.908614405Z" level=info msg="StartContainer for \"f0232540122299dc64a03134c9f128359c0d654e3bebeec857fcabb4b0f6d9ec\"" Jan 24 12:15:25.910031 containerd[1599]: time="2026-01-24T12:15:25.909925001Z" level=info msg="connecting to shim 
f0232540122299dc64a03134c9f128359c0d654e3bebeec857fcabb4b0f6d9ec" address="unix:///run/containerd/s/c0b12b4ae46ca30503fe47ca177b7978cf88e43763a548ee17838bf12503ea93" protocol=ttrpc version=3 Jan 24 12:15:25.949440 systemd[1]: Started cri-containerd-f0232540122299dc64a03134c9f128359c0d654e3bebeec857fcabb4b0f6d9ec.scope - libcontainer container f0232540122299dc64a03134c9f128359c0d654e3bebeec857fcabb4b0f6d9ec. Jan 24 12:15:26.028000 audit: BPF prog-id=166 op=LOAD Jan 24 12:15:26.028000 audit[3482]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3326 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:26.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323332353430313232323939646336346130333133346339663132 Jan 24 12:15:26.028000 audit: BPF prog-id=167 op=LOAD Jan 24 12:15:26.028000 audit[3482]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3326 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:26.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323332353430313232323939646336346130333133346339663132 Jan 24 12:15:26.028000 audit: BPF prog-id=167 op=UNLOAD Jan 24 12:15:26.028000 audit[3482]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:26.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323332353430313232323939646336346130333133346339663132 Jan 24 12:15:26.028000 audit: BPF prog-id=166 op=UNLOAD Jan 24 12:15:26.028000 audit[3482]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:26.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323332353430313232323939646336346130333133346339663132 Jan 24 12:15:26.028000 audit: BPF prog-id=168 op=LOAD Jan 24 12:15:26.028000 audit[3482]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3326 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:26.028000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323332353430313232323939646336346130333133346339663132 Jan 24 12:15:26.070717 containerd[1599]: time="2026-01-24T12:15:26.070326553Z" level=info msg="StartContainer for \"f0232540122299dc64a03134c9f128359c0d654e3bebeec857fcabb4b0f6d9ec\" returns successfully" Jan 24 12:15:26.087948 systemd[1]: cri-containerd-f0232540122299dc64a03134c9f128359c0d654e3bebeec857fcabb4b0f6d9ec.scope: Deactivated successfully. Jan 24 12:15:26.093000 audit: BPF prog-id=168 op=UNLOAD Jan 24 12:15:26.094602 containerd[1599]: time="2026-01-24T12:15:26.094038735Z" level=info msg="received container exit event container_id:\"f0232540122299dc64a03134c9f128359c0d654e3bebeec857fcabb4b0f6d9ec\" id:\"f0232540122299dc64a03134c9f128359c0d654e3bebeec857fcabb4b0f6d9ec\" pid:3495 exited_at:{seconds:1769256926 nanos:92287187}" Jan 24 12:15:26.143004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0232540122299dc64a03134c9f128359c0d654e3bebeec857fcabb4b0f6d9ec-rootfs.mount: Deactivated successfully. Jan 24 12:15:26.320790 kubelet[2795]: E0124 12:15:26.320602 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vfhsw" podUID="d101ff7c-9560-44ae-a339-4a5dc1053aeb" Jan 24 12:15:26.482867 kubelet[2795]: I0124 12:15:26.482529 2795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 12:15:26.483327 kubelet[2795]: E0124 12:15:26.482931 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:26.483327 kubelet[2795]: E0124 12:15:26.482936 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:26.484223 containerd[1599]: time="2026-01-24T12:15:26.483919697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 12:15:26.510876 kubelet[2795]: I0124 12:15:26.510834 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8b64d44d5-z7nzz" podStartSLOduration=3.040000082 podStartE2EDuration="4.510819867s" podCreationTimestamp="2026-01-24 12:15:22 +0000 UTC" firstStartedPulling="2026-01-24 12:15:22.823580224 +0000 UTC m=+19.688022537" lastFinishedPulling="2026-01-24 12:15:24.29440001 +0000 UTC m=+21.158842322" observedRunningTime="2026-01-24 12:15:25.505541335 +0000 UTC m=+22.369983666" watchObservedRunningTime="2026-01-24 12:15:26.510819867 +0000 UTC m=+23.375262179" Jan 24 12:15:28.320670 kubelet[2795]: E0124 12:15:28.320536 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vfhsw" podUID="d101ff7c-9560-44ae-a339-4a5dc1053aeb" Jan 24 12:15:28.806328 containerd[1599]: time="2026-01-24T12:15:28.806204228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 24 12:15:28.807568 containerd[1599]: time="2026-01-24T12:15:28.807262763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 24 12:15:28.809042 containerd[1599]: time="2026-01-24T12:15:28.808944250Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:15:28.812002 containerd[1599]: time="2026-01-24T12:15:28.811779585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:15:28.812838 containerd[1599]: time="2026-01-24T12:15:28.812728546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.328728769s" Jan 24 12:15:28.812838 containerd[1599]: time="2026-01-24T12:15:28.812819535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 12:15:28.816285 containerd[1599]: time="2026-01-24T12:15:28.816049089Z" level=info msg="CreateContainer within sandbox \"8be0c2dda12c362a4c376e81685f7daa86b577b4f777c1c01bc19c7b23dc5722\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 12:15:28.831176 containerd[1599]: time="2026-01-24T12:15:28.829544433Z" level=info msg="Container 55d6317b79947bf4978c13e69f7c12599cd4a51e55dbb053eb12c8101c1efb9b: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:15:28.840451 containerd[1599]: time="2026-01-24T12:15:28.840373248Z" level=info msg="CreateContainer within sandbox \"8be0c2dda12c362a4c376e81685f7daa86b577b4f777c1c01bc19c7b23dc5722\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"55d6317b79947bf4978c13e69f7c12599cd4a51e55dbb053eb12c8101c1efb9b\"" Jan 24 12:15:28.841549 containerd[1599]: time="2026-01-24T12:15:28.841400305Z" level=info msg="StartContainer for \"55d6317b79947bf4978c13e69f7c12599cd4a51e55dbb053eb12c8101c1efb9b\"" Jan 24 12:15:28.842740 containerd[1599]: time="2026-01-24T12:15:28.842692265Z" level=info msg="connecting to shim 55d6317b79947bf4978c13e69f7c12599cd4a51e55dbb053eb12c8101c1efb9b" address="unix:///run/containerd/s/c0b12b4ae46ca30503fe47ca177b7978cf88e43763a548ee17838bf12503ea93" protocol=ttrpc version=3 Jan 24 12:15:28.893559 systemd[1]: Started cri-containerd-55d6317b79947bf4978c13e69f7c12599cd4a51e55dbb053eb12c8101c1efb9b.scope - libcontainer container 55d6317b79947bf4978c13e69f7c12599cd4a51e55dbb053eb12c8101c1efb9b. 
Jan 24 12:15:28.978000 audit: BPF prog-id=169 op=LOAD Jan 24 12:15:28.984285 kernel: kauditd_printk_skb: 72 callbacks suppressed Jan 24 12:15:28.984417 kernel: audit: type=1334 audit(1769256928.978:553): prog-id=169 op=LOAD Jan 24 12:15:28.978000 audit[3543]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3326 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:29.010012 kernel: audit: type=1300 audit(1769256928.978:553): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3326 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:29.010385 kernel: audit: type=1327 audit(1769256928.978:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535643633313762373939343762663439373863313365363966376331 Jan 24 12:15:28.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535643633313762373939343762663439373863313365363966376331 Jan 24 12:15:29.030282 kernel: audit: type=1334 audit(1769256928.978:554): prog-id=170 op=LOAD Jan 24 12:15:28.978000 audit: BPF prog-id=170 op=LOAD Jan 24 12:15:28.978000 audit[3543]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3326 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:29.060302 kernel: audit: type=1300 audit(1769256928.978:554): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3326 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:29.060412 kernel: audit: type=1327 audit(1769256928.978:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535643633313762373939343762663439373863313365363966376331 Jan 24 12:15:28.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535643633313762373939343762663439373863313365363966376331 Jan 24 12:15:29.076317 kernel: audit: type=1334 audit(1769256928.978:555): prog-id=170 op=UNLOAD Jan 24 12:15:28.978000 audit: BPF prog-id=170 op=UNLOAD Jan 24 12:15:28.978000 audit[3543]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:29.096662 kernel: audit: type=1300 
audit(1769256928.978:555): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:29.098225 kernel: audit: type=1327 audit(1769256928.978:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535643633313762373939343762663439373863313365363966376331 Jan 24 12:15:28.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535643633313762373939343762663439373863313365363966376331 Jan 24 12:15:29.098648 containerd[1599]: time="2026-01-24T12:15:29.097007606Z" level=info msg="StartContainer for \"55d6317b79947bf4978c13e69f7c12599cd4a51e55dbb053eb12c8101c1efb9b\" returns successfully" Jan 24 12:15:29.112363 kernel: audit: type=1334 audit(1769256928.979:556): prog-id=169 op=UNLOAD Jan 24 12:15:28.979000 audit: BPF prog-id=169 op=UNLOAD Jan 24 12:15:28.979000 audit[3543]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:28.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535643633313762373939343762663439373863313365363966376331 Jan 24 12:15:28.979000 audit: BPF prog-id=171 op=LOAD Jan 24 12:15:28.979000 audit[3543]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3326 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:28.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535643633313762373939343762663439373863313365363966376331 Jan 24 12:15:29.501334 kubelet[2795]: E0124 12:15:29.499877 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:29.827453 systemd[1]: cri-containerd-55d6317b79947bf4978c13e69f7c12599cd4a51e55dbb053eb12c8101c1efb9b.scope: Deactivated successfully. Jan 24 12:15:29.828579 systemd[1]: cri-containerd-55d6317b79947bf4978c13e69f7c12599cd4a51e55dbb053eb12c8101c1efb9b.scope: Consumed 873ms CPU time, 176.9M memory peak, 3.7M read from disk, 171.3M written to disk. 
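The PROCTITLE values in the audit records above are hex because auditd hex-encodes any field containing non-printable bytes; a process title is argv joined with NUL separators, so runc's command line shows up as one long hex string (the tail of the container-ID path is truncated in these records). Decoding the leading portion is enough to read it; a small sketch, with the hex prefix copied from the records above:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Leading portion of one PROCTITLE value from the audit records above.
	enc := "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	raw, err := hex.DecodeString(enc)
	if err != nil {
		panic(err)
	}
	// argv elements are separated by NUL bytes in the raw process title.
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	// Output: runc --root /run/containerd/runc/k8s.io
}
```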
Jan 24 12:15:29.830453 containerd[1599]: time="2026-01-24T12:15:29.830384220Z" level=info msg="received container exit event container_id:\"55d6317b79947bf4978c13e69f7c12599cd4a51e55dbb053eb12c8101c1efb9b\" id:\"55d6317b79947bf4978c13e69f7c12599cd4a51e55dbb053eb12c8101c1efb9b\" pid:3556 exited_at:{seconds:1769256929 nanos:829972947}" Jan 24 12:15:29.833000 audit: BPF prog-id=171 op=UNLOAD Jan 24 12:15:29.875634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55d6317b79947bf4978c13e69f7c12599cd4a51e55dbb053eb12c8101c1efb9b-rootfs.mount: Deactivated successfully. Jan 24 12:15:29.899324 kubelet[2795]: I0124 12:15:29.899208 2795 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 12:15:29.966790 systemd[1]: Created slice kubepods-besteffort-pod1e987738_bc46_4583_b000_c8f1bfbb02a7.slice - libcontainer container kubepods-besteffort-pod1e987738_bc46_4583_b000_c8f1bfbb02a7.slice. Jan 24 12:15:30.013955 systemd[1]: Created slice kubepods-burstable-pod8adc59a5_58dd_4725_9866_6432e45b7341.slice - libcontainer container kubepods-burstable-pod8adc59a5_58dd_4725_9866_6432e45b7341.slice. Jan 24 12:15:30.034832 systemd[1]: Created slice kubepods-besteffort-pode2057d14_bb68_40a3_9abf_385c845f08ca.slice - libcontainer container kubepods-besteffort-pode2057d14_bb68_40a3_9abf_385c845f08ca.slice. Jan 24 12:15:30.043852 systemd[1]: Created slice kubepods-burstable-pod38d7fa86_6007_4b91_a728_a99e3b4e27e2.slice - libcontainer container kubepods-burstable-pod38d7fa86_6007_4b91_a728_a99e3b4e27e2.slice. Jan 24 12:15:30.050561 kubelet[2795]: I0124 12:15:30.050526 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a12ceae4-0501-4e3c-a330-62b420b78e69-whisker-ca-bundle\") pod \"whisker-74898f99df-6pgdt\" (UID: \"a12ceae4-0501-4e3c-a330-62b420b78e69\") " pod="calico-system/whisker-74898f99df-6pgdt" Jan 24 12:15:30.050756 kubelet[2795]: I0124 12:15:30.050731 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2057d14-bb68-40a3-9abf-385c845f08ca-tigera-ca-bundle\") pod \"calico-kube-controllers-79bb998d4d-94tq7\" (UID: \"e2057d14-bb68-40a3-9abf-385c845f08ca\") " pod="calico-system/calico-kube-controllers-79bb998d4d-94tq7" Jan 24 12:15:30.050890 kubelet[2795]: I0124 12:15:30.050867 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdwl8\" (UniqueName: \"kubernetes.io/projected/51f294f0-54db-45aa-b128-8a4414560ade-kube-api-access-pdwl8\") pod \"goldmane-666569f655-xv8ss\" (UID: \"51f294f0-54db-45aa-b128-8a4414560ade\") " pod="calico-system/goldmane-666569f655-xv8ss" Jan 24 12:15:30.050998 kubelet[2795]: I0124 12:15:30.050980 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38d7fa86-6007-4b91-a728-a99e3b4e27e2-config-volume\") pod \"coredns-668d6bf9bc-8xp4w\" (UID: \"38d7fa86-6007-4b91-a728-a99e3b4e27e2\") " pod="kube-system/coredns-668d6bf9bc-8xp4w" Jan 24 12:15:30.051279 kubelet[2795]: I0124 12:15:30.051073 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a12ceae4-0501-4e3c-a330-62b420b78e69-whisker-backend-key-pair\") pod \"whisker-74898f99df-6pgdt\" (UID: 
\"a12ceae4-0501-4e3c-a330-62b420b78e69\") " pod="calico-system/whisker-74898f99df-6pgdt" Jan 24 12:15:30.051374 kubelet[2795]: I0124 12:15:30.051361 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54mww\" (UniqueName: \"kubernetes.io/projected/a12ceae4-0501-4e3c-a330-62b420b78e69-kube-api-access-54mww\") pod \"whisker-74898f99df-6pgdt\" (UID: \"a12ceae4-0501-4e3c-a330-62b420b78e69\") " pod="calico-system/whisker-74898f99df-6pgdt" Jan 24 12:15:30.051441 kubelet[2795]: I0124 12:15:30.051429 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjxqj\" (UniqueName: \"kubernetes.io/projected/8adc59a5-58dd-4725-9866-6432e45b7341-kube-api-access-rjxqj\") pod \"coredns-668d6bf9bc-5rqq9\" (UID: \"8adc59a5-58dd-4725-9866-6432e45b7341\") " pod="kube-system/coredns-668d6bf9bc-5rqq9" Jan 24 12:15:30.051612 kubelet[2795]: I0124 12:15:30.051589 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sfpf\" (UniqueName: \"kubernetes.io/projected/1e987738-bc46-4583-b000-c8f1bfbb02a7-kube-api-access-9sfpf\") pod \"calico-apiserver-8479996d5-hmtzg\" (UID: \"1e987738-bc46-4583-b000-c8f1bfbb02a7\") " pod="calico-apiserver/calico-apiserver-8479996d5-hmtzg" Jan 24 12:15:30.051722 kubelet[2795]: I0124 12:15:30.051699 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg6sz\" (UniqueName: \"kubernetes.io/projected/e2057d14-bb68-40a3-9abf-385c845f08ca-kube-api-access-pg6sz\") pod \"calico-kube-controllers-79bb998d4d-94tq7\" (UID: \"e2057d14-bb68-40a3-9abf-385c845f08ca\") " pod="calico-system/calico-kube-controllers-79bb998d4d-94tq7" Jan 24 12:15:30.051826 kubelet[2795]: I0124 12:15:30.051808 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f06e4a34-950d-4dc0-91e1-512b91c976bf-calico-apiserver-certs\") pod \"calico-apiserver-8479996d5-zttk4\" (UID: \"f06e4a34-950d-4dc0-91e1-512b91c976bf\") " pod="calico-apiserver/calico-apiserver-8479996d5-zttk4" Jan 24 12:15:30.051940 kubelet[2795]: I0124 12:15:30.051919 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/51f294f0-54db-45aa-b128-8a4414560ade-goldmane-key-pair\") pod \"goldmane-666569f655-xv8ss\" (UID: \"51f294f0-54db-45aa-b128-8a4414560ade\") " pod="calico-system/goldmane-666569f655-xv8ss" Jan 24 12:15:30.052059 kubelet[2795]: I0124 12:15:30.052042 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxhfp\" (UniqueName: \"kubernetes.io/projected/38d7fa86-6007-4b91-a728-a99e3b4e27e2-kube-api-access-kxhfp\") pod \"coredns-668d6bf9bc-8xp4w\" (UID: \"38d7fa86-6007-4b91-a728-a99e3b4e27e2\") " pod="kube-system/coredns-668d6bf9bc-8xp4w" Jan 24 12:15:30.052287 kubelet[2795]: I0124 12:15:30.052266 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1e987738-bc46-4583-b000-c8f1bfbb02a7-calico-apiserver-certs\") pod \"calico-apiserver-8479996d5-hmtzg\" (UID: \"1e987738-bc46-4583-b000-c8f1bfbb02a7\") " pod="calico-apiserver/calico-apiserver-8479996d5-hmtzg" Jan 24 12:15:30.052579 kubelet[2795]: I0124 12:15:30.052559 2795 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51f294f0-54db-45aa-b128-8a4414560ade-config\") pod \"goldmane-666569f655-xv8ss\" (UID: \"51f294f0-54db-45aa-b128-8a4414560ade\") " pod="calico-system/goldmane-666569f655-xv8ss" Jan 24 12:15:30.052679 kubelet[2795]: I0124 12:15:30.052662 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51f294f0-54db-45aa-b128-8a4414560ade-goldmane-ca-bundle\") pod \"goldmane-666569f655-xv8ss\" (UID: \"51f294f0-54db-45aa-b128-8a4414560ade\") " pod="calico-system/goldmane-666569f655-xv8ss" Jan 24 12:15:30.052773 kubelet[2795]: I0124 12:15:30.052759 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s4r4\" (UniqueName: \"kubernetes.io/projected/f06e4a34-950d-4dc0-91e1-512b91c976bf-kube-api-access-8s4r4\") pod \"calico-apiserver-8479996d5-zttk4\" (UID: \"f06e4a34-950d-4dc0-91e1-512b91c976bf\") " pod="calico-apiserver/calico-apiserver-8479996d5-zttk4" Jan 24 12:15:30.052829 kubelet[2795]: I0124 12:15:30.052818 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8adc59a5-58dd-4725-9866-6432e45b7341-config-volume\") pod \"coredns-668d6bf9bc-5rqq9\" (UID: \"8adc59a5-58dd-4725-9866-6432e45b7341\") " pod="kube-system/coredns-668d6bf9bc-5rqq9" Jan 24 12:15:30.058222 systemd[1]: Created slice kubepods-besteffort-poda12ceae4_0501_4e3c_a330_62b420b78e69.slice - libcontainer container kubepods-besteffort-poda12ceae4_0501_4e3c_a330_62b420b78e69.slice. Jan 24 12:15:30.067761 systemd[1]: Created slice kubepods-besteffort-podf06e4a34_950d_4dc0_91e1_512b91c976bf.slice - libcontainer container kubepods-besteffort-podf06e4a34_950d_4dc0_91e1_512b91c976bf.slice. Jan 24 12:15:30.087974 systemd[1]: Created slice kubepods-besteffort-pod51f294f0_54db_45aa_b128_8a4414560ade.slice - libcontainer container kubepods-besteffort-pod51f294f0_54db_45aa_b128_8a4414560ade.slice. Jan 24 12:15:30.280375 containerd[1599]: time="2026-01-24T12:15:30.280320751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479996d5-hmtzg,Uid:1e987738-bc46-4583-b000-c8f1bfbb02a7,Namespace:calico-apiserver,Attempt:0,}" Jan 24 12:15:30.325336 kubelet[2795]: E0124 12:15:30.325216 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:30.325863 containerd[1599]: time="2026-01-24T12:15:30.325805209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5rqq9,Uid:8adc59a5-58dd-4725-9866-6432e45b7341,Namespace:kube-system,Attempt:0,}" Jan 24 12:15:30.334408 systemd[1]: Created slice kubepods-besteffort-podd101ff7c_9560_44ae_a339_4a5dc1053aeb.slice - libcontainer container kubepods-besteffort-podd101ff7c_9560_44ae_a339_4a5dc1053aeb.slice. 
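The kubepods-*.slice names in the systemd entries above are derived mechanically from each pod's QoS class and UID: the kubelet (running with the systemd cgroup driver) replaces the dashes in the UID with underscores, which is why the csi-node-driver pod with UID d101ff7c-9560-44ae-a339-4a5dc1053aeb lands in kubepods-besteffort-podd101ff7c_9560_44ae_a339_4a5dc1053aeb.slice. A small sketch of that mapping (sliceName is an illustrative helper, not a kubelet function):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the cgroup slice names seen above: dashes in the pod UID
// become underscores, prefixed by the QoS class ("besteffort" or "burstable" here).
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID taken from the csi-node-driver-vfhsw entries in the log.
	fmt.Println(sliceName("besteffort", "d101ff7c-9560-44ae-a339-4a5dc1053aeb"))
	// -> kubepods-besteffort-podd101ff7c_9560_44ae_a339_4a5dc1053aeb.slice
}
```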
Jan 24 12:15:30.339538 containerd[1599]: time="2026-01-24T12:15:30.339309365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vfhsw,Uid:d101ff7c-9560-44ae-a339-4a5dc1053aeb,Namespace:calico-system,Attempt:0,}" Jan 24 12:15:30.340950 containerd[1599]: time="2026-01-24T12:15:30.340255881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bb998d4d-94tq7,Uid:e2057d14-bb68-40a3-9abf-385c845f08ca,Namespace:calico-system,Attempt:0,}" Jan 24 12:15:30.352229 kubelet[2795]: E0124 12:15:30.351701 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:30.353032 containerd[1599]: time="2026-01-24T12:15:30.353005713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8xp4w,Uid:38d7fa86-6007-4b91-a728-a99e3b4e27e2,Namespace:kube-system,Attempt:0,}" Jan 24 12:15:30.365352 containerd[1599]: time="2026-01-24T12:15:30.364894929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74898f99df-6pgdt,Uid:a12ceae4-0501-4e3c-a330-62b420b78e69,Namespace:calico-system,Attempt:0,}" Jan 24 12:15:30.378272 containerd[1599]: time="2026-01-24T12:15:30.377961732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479996d5-zttk4,Uid:f06e4a34-950d-4dc0-91e1-512b91c976bf,Namespace:calico-apiserver,Attempt:0,}" Jan 24 12:15:30.395693 containerd[1599]: time="2026-01-24T12:15:30.395601757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xv8ss,Uid:51f294f0-54db-45aa-b128-8a4414560ade,Namespace:calico-system,Attempt:0,}" Jan 24 12:15:30.509439 kubelet[2795]: E0124 12:15:30.509330 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:30.511804 containerd[1599]: time="2026-01-24T12:15:30.511745019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 12:15:30.535718 containerd[1599]: time="2026-01-24T12:15:30.535607655Z" level=error msg="Failed to destroy network for sandbox \"30e1207550ac795329895a825d1e6be3d89d8444c2d1c47cff0fc250623d997c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.548185 containerd[1599]: time="2026-01-24T12:15:30.547945405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479996d5-hmtzg,Uid:1e987738-bc46-4583-b000-c8f1bfbb02a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30e1207550ac795329895a825d1e6be3d89d8444c2d1c47cff0fc250623d997c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.549869 kubelet[2795]: E0124 12:15:30.549318 2795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30e1207550ac795329895a825d1e6be3d89d8444c2d1c47cff0fc250623d997c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.549869 kubelet[2795]: E0124 
12:15:30.549394 2795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30e1207550ac795329895a825d1e6be3d89d8444c2d1c47cff0fc250623d997c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8479996d5-hmtzg" Jan 24 12:15:30.549869 kubelet[2795]: E0124 12:15:30.549417 2795 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30e1207550ac795329895a825d1e6be3d89d8444c2d1c47cff0fc250623d997c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8479996d5-hmtzg" Jan 24 12:15:30.549977 kubelet[2795]: E0124 12:15:30.549837 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8479996d5-hmtzg_calico-apiserver(1e987738-bc46-4583-b000-c8f1bfbb02a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8479996d5-hmtzg_calico-apiserver(1e987738-bc46-4583-b000-c8f1bfbb02a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30e1207550ac795329895a825d1e6be3d89d8444c2d1c47cff0fc250623d997c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8479996d5-hmtzg" podUID="1e987738-bc46-4583-b000-c8f1bfbb02a7" Jan 24 12:15:30.597003 containerd[1599]: time="2026-01-24T12:15:30.596039892Z" level=error msg="Failed to destroy network for sandbox \"bb7c9ba7d928a315cc1baccce8628b47a9e5650910ad8faa89dcd5bf71847c5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.611559 containerd[1599]: time="2026-01-24T12:15:30.611387351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5rqq9,Uid:8adc59a5-58dd-4725-9866-6432e45b7341,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb7c9ba7d928a315cc1baccce8628b47a9e5650910ad8faa89dcd5bf71847c5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.612005 containerd[1599]: time="2026-01-24T12:15:30.611418093Z" level=error msg="Failed to destroy network for sandbox \"b446a00ad2f20d96f5079624d929edd2d65bfd7b87587f47fa58410696d9bc26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.612769 kubelet[2795]: E0124 12:15:30.612627 2795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb7c9ba7d928a315cc1baccce8628b47a9e5650910ad8faa89dcd5bf71847c5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.612769 kubelet[2795]: E0124 12:15:30.612750 2795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb7c9ba7d928a315cc1baccce8628b47a9e5650910ad8faa89dcd5bf71847c5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5rqq9" Jan 24 12:15:30.612860 kubelet[2795]: E0124 12:15:30.612771 2795 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb7c9ba7d928a315cc1baccce8628b47a9e5650910ad8faa89dcd5bf71847c5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5rqq9" Jan 24 12:15:30.612860 kubelet[2795]: E0124 12:15:30.612808 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5rqq9_kube-system(8adc59a5-58dd-4725-9866-6432e45b7341)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5rqq9_kube-system(8adc59a5-58dd-4725-9866-6432e45b7341)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb7c9ba7d928a315cc1baccce8628b47a9e5650910ad8faa89dcd5bf71847c5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5rqq9" podUID="8adc59a5-58dd-4725-9866-6432e45b7341" Jan 24 12:15:30.617878 containerd[1599]: time="2026-01-24T12:15:30.617601356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vfhsw,Uid:d101ff7c-9560-44ae-a339-4a5dc1053aeb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b446a00ad2f20d96f5079624d929edd2d65bfd7b87587f47fa58410696d9bc26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.618041 kubelet[2795]: E0124 12:15:30.617770 2795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b446a00ad2f20d96f5079624d929edd2d65bfd7b87587f47fa58410696d9bc26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.618041 kubelet[2795]: E0124 12:15:30.617922 2795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b446a00ad2f20d96f5079624d929edd2d65bfd7b87587f47fa58410696d9bc26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vfhsw" Jan 24 12:15:30.618041 kubelet[2795]: E0124 12:15:30.617946 2795 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b446a00ad2f20d96f5079624d929edd2d65bfd7b87587f47fa58410696d9bc26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vfhsw" Jan 24 12:15:30.618309 kubelet[2795]: E0124 12:15:30.617987 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vfhsw_calico-system(d101ff7c-9560-44ae-a339-4a5dc1053aeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vfhsw_calico-system(d101ff7c-9560-44ae-a339-4a5dc1053aeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b446a00ad2f20d96f5079624d929edd2d65bfd7b87587f47fa58410696d9bc26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vfhsw" podUID="d101ff7c-9560-44ae-a339-4a5dc1053aeb" Jan 24 12:15:30.639749 containerd[1599]: time="2026-01-24T12:15:30.638847544Z" level=error msg="Failed to destroy network for sandbox \"6ac8cf25dc63114840d6a8cdb7a3f28b2663f760d63e495477e20c582aad100a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.644921 containerd[1599]: time="2026-01-24T12:15:30.644254523Z" level=error msg="Failed to destroy network for sandbox \"3c8a1c9d3a32411b6b7c63a1ccca2f5ff4408197dc83f98796da117a1ddc1ed6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.645514 containerd[1599]: time="2026-01-24T12:15:30.645445485Z" level=error msg="Failed to destroy network for sandbox \"3f6b6d7a14c2c7b03e005d98b294d70091a125962942a4ad99c116d570d38cac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.663742 containerd[1599]: time="2026-01-24T12:15:30.663211959Z" level=error msg="Failed to destroy network for sandbox \"3ee16ff07be86ffb557b51821c379f879bbdcfbf9f3d587c81f948265570ab7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.676585 containerd[1599]: time="2026-01-24T12:15:30.676531081Z" level=error msg="Failed to destroy network for sandbox \"100c705c5d9dc81dd1001e5090358b704d6dd55fb2399a25b57e09c7cb0b934c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.692540 containerd[1599]: time="2026-01-24T12:15:30.692379187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8xp4w,Uid:38d7fa86-6007-4b91-a728-a99e3b4e27e2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ee16ff07be86ffb557b51821c379f879bbdcfbf9f3d587c81f948265570ab7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.693202 kubelet[2795]: E0124 12:15:30.692958 2795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ee16ff07be86ffb557b51821c379f879bbdcfbf9f3d587c81f948265570ab7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.693273 kubelet[2795]: E0124 12:15:30.693204 2795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ee16ff07be86ffb557b51821c379f879bbdcfbf9f3d587c81f948265570ab7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8xp4w" Jan 24 12:15:30.693273 kubelet[2795]: E0124 12:15:30.693227 2795 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ee16ff07be86ffb557b51821c379f879bbdcfbf9f3d587c81f948265570ab7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8xp4w" Jan 24 12:15:30.693337 kubelet[2795]: E0124 12:15:30.693314 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8xp4w_kube-system(38d7fa86-6007-4b91-a728-a99e3b4e27e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8xp4w_kube-system(38d7fa86-6007-4b91-a728-a99e3b4e27e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ee16ff07be86ffb557b51821c379f879bbdcfbf9f3d587c81f948265570ab7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8xp4w" podUID="38d7fa86-6007-4b91-a728-a99e3b4e27e2" Jan 24 12:15:30.694231 containerd[1599]: time="2026-01-24T12:15:30.694050265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xv8ss,Uid:51f294f0-54db-45aa-b128-8a4414560ade,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c8a1c9d3a32411b6b7c63a1ccca2f5ff4408197dc83f98796da117a1ddc1ed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.694622 kubelet[2795]: E0124 12:15:30.694600 2795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c8a1c9d3a32411b6b7c63a1ccca2f5ff4408197dc83f98796da117a1ddc1ed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.694700 kubelet[2795]: E0124 12:15:30.694685 2795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3c8a1c9d3a32411b6b7c63a1ccca2f5ff4408197dc83f98796da117a1ddc1ed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xv8ss" Jan 24 12:15:30.694781 kubelet[2795]: E0124 12:15:30.694756 2795 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c8a1c9d3a32411b6b7c63a1ccca2f5ff4408197dc83f98796da117a1ddc1ed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xv8ss" Jan 24 12:15:30.694868 kubelet[2795]: E0124 12:15:30.694848 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-xv8ss_calico-system(51f294f0-54db-45aa-b128-8a4414560ade)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-xv8ss_calico-system(51f294f0-54db-45aa-b128-8a4414560ade)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c8a1c9d3a32411b6b7c63a1ccca2f5ff4408197dc83f98796da117a1ddc1ed6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-xv8ss" podUID="51f294f0-54db-45aa-b128-8a4414560ade" Jan 24 12:15:30.695520 containerd[1599]: time="2026-01-24T12:15:30.695408219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479996d5-zttk4,Uid:f06e4a34-950d-4dc0-91e1-512b91c976bf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ac8cf25dc63114840d6a8cdb7a3f28b2663f760d63e495477e20c582aad100a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.695880 kubelet[2795]: E0124 12:15:30.695610 2795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ac8cf25dc63114840d6a8cdb7a3f28b2663f760d63e495477e20c582aad100a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.695880 kubelet[2795]: E0124 12:15:30.695646 2795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ac8cf25dc63114840d6a8cdb7a3f28b2663f760d63e495477e20c582aad100a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8479996d5-zttk4" Jan 24 12:15:30.695880 kubelet[2795]: E0124 12:15:30.695670 2795 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ac8cf25dc63114840d6a8cdb7a3f28b2663f760d63e495477e20c582aad100a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8479996d5-zttk4" Jan 24 12:15:30.696016 kubelet[2795]: E0124 12:15:30.695761 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8479996d5-zttk4_calico-apiserver(f06e4a34-950d-4dc0-91e1-512b91c976bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8479996d5-zttk4_calico-apiserver(f06e4a34-950d-4dc0-91e1-512b91c976bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ac8cf25dc63114840d6a8cdb7a3f28b2663f760d63e495477e20c582aad100a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8479996d5-zttk4" podUID="f06e4a34-950d-4dc0-91e1-512b91c976bf" Jan 24 12:15:30.699320 containerd[1599]: time="2026-01-24T12:15:30.699212758Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74898f99df-6pgdt,Uid:a12ceae4-0501-4e3c-a330-62b420b78e69,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"100c705c5d9dc81dd1001e5090358b704d6dd55fb2399a25b57e09c7cb0b934c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.699649 kubelet[2795]: E0124 12:15:30.699561 2795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"100c705c5d9dc81dd1001e5090358b704d6dd55fb2399a25b57e09c7cb0b934c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.699649 kubelet[2795]: E0124 12:15:30.699637 2795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"100c705c5d9dc81dd1001e5090358b704d6dd55fb2399a25b57e09c7cb0b934c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74898f99df-6pgdt" Jan 24 12:15:30.699726 kubelet[2795]: E0124 12:15:30.699653 2795 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"100c705c5d9dc81dd1001e5090358b704d6dd55fb2399a25b57e09c7cb0b934c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74898f99df-6pgdt" Jan 24 12:15:30.699769 kubelet[2795]: E0124 12:15:30.699734 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-74898f99df-6pgdt_calico-system(a12ceae4-0501-4e3c-a330-62b420b78e69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-74898f99df-6pgdt_calico-system(a12ceae4-0501-4e3c-a330-62b420b78e69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"100c705c5d9dc81dd1001e5090358b704d6dd55fb2399a25b57e09c7cb0b934c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74898f99df-6pgdt" podUID="a12ceae4-0501-4e3c-a330-62b420b78e69" Jan 24 12:15:30.701310 containerd[1599]: time="2026-01-24T12:15:30.701271822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bb998d4d-94tq7,Uid:e2057d14-bb68-40a3-9abf-385c845f08ca,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f6b6d7a14c2c7b03e005d98b294d70091a125962942a4ad99c116d570d38cac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.701842 kubelet[2795]: E0124 12:15:30.701748 2795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f6b6d7a14c2c7b03e005d98b294d70091a125962942a4ad99c116d570d38cac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:15:30.701842 kubelet[2795]: E0124 12:15:30.701828 2795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f6b6d7a14c2c7b03e005d98b294d70091a125962942a4ad99c116d570d38cac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79bb998d4d-94tq7" Jan 24 12:15:30.701912 kubelet[2795]: E0124 12:15:30.701849 2795 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f6b6d7a14c2c7b03e005d98b294d70091a125962942a4ad99c116d570d38cac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79bb998d4d-94tq7" Jan 24 12:15:30.701912 kubelet[2795]: E0124 12:15:30.701884 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79bb998d4d-94tq7_calico-system(e2057d14-bb68-40a3-9abf-385c845f08ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79bb998d4d-94tq7_calico-system(e2057d14-bb68-40a3-9abf-385c845f08ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f6b6d7a14c2c7b03e005d98b294d70091a125962942a4ad99c116d570d38cac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79bb998d4d-94tq7" podUID="e2057d14-bb68-40a3-9abf-385c845f08ca" Jan 24 12:15:39.341453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2542646106.mount: Deactivated successfully. 
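Every sandbox-creation failure above is the same underlying condition: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico/ mounted. A minimal Go sketch of that precondition check (illustrative only; the path is taken from the logged error, the code is not Calico's):

package main

// Illustrative sketch of the precondition behind every CNI "add"/"delete"
// failure above: /var/lib/calico/nodename must exist before pods can be
// networked. The path comes from the logged error; this is not Calico's code.

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// The condition the log keeps reporting: calico/node has not started
		// yet (or has not mounted /var/lib/calico/), so every CNI ADD fails.
		fmt.Fprintf(os.Stderr, "stat %s: %v\n", nodenameFile, err)
		os.Exit(1)
	}
	fmt.Printf("node registered as %q; sandbox networking can proceed\n", string(data))
}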
Jan 24 12:15:39.565873 containerd[1599]: time="2026-01-24T12:15:39.565634799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:15:39.567826 containerd[1599]: time="2026-01-24T12:15:39.567608433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 24 12:15:39.571056 containerd[1599]: time="2026-01-24T12:15:39.571021944Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:15:39.574339 containerd[1599]: time="2026-01-24T12:15:39.574230984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:15:39.574998 containerd[1599]: time="2026-01-24T12:15:39.574898820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.063119907s" Jan 24 12:15:39.574998 containerd[1599]: time="2026-01-24T12:15:39.574967879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 12:15:39.601484 containerd[1599]: time="2026-01-24T12:15:39.601373785Z" level=info msg="CreateContainer within sandbox \"8be0c2dda12c362a4c376e81685f7daa86b577b4f777c1c01bc19c7b23dc5722\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 12:15:39.622892 containerd[1599]: time="2026-01-24T12:15:39.622761510Z" level=info msg="Container 650b88053b2e7c4d8965ddae004dfe0301a905cba2218a8bc35a00930f5cfbd9: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:15:39.625599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4180984892.mount: Deactivated successfully. Jan 24 12:15:39.649455 containerd[1599]: time="2026-01-24T12:15:39.649356297Z" level=info msg="CreateContainer within sandbox \"8be0c2dda12c362a4c376e81685f7daa86b577b4f777c1c01bc19c7b23dc5722\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"650b88053b2e7c4d8965ddae004dfe0301a905cba2218a8bc35a00930f5cfbd9\"" Jan 24 12:15:39.650711 containerd[1599]: time="2026-01-24T12:15:39.650479293Z" level=info msg="StartContainer for \"650b88053b2e7c4d8965ddae004dfe0301a905cba2218a8bc35a00930f5cfbd9\"" Jan 24 12:15:39.652069 containerd[1599]: time="2026-01-24T12:15:39.651844662Z" level=info msg="connecting to shim 650b88053b2e7c4d8965ddae004dfe0301a905cba2218a8bc35a00930f5cfbd9" address="unix:///run/containerd/s/c0b12b4ae46ca30503fe47ca177b7978cf88e43763a548ee17838bf12503ea93" protocol=ttrpc version=3 Jan 24 12:15:39.682627 systemd[1]: Started cri-containerd-650b88053b2e7c4d8965ddae004dfe0301a905cba2218a8bc35a00930f5cfbd9.scope - libcontainer container 650b88053b2e7c4d8965ddae004dfe0301a905cba2218a8bc35a00930f5cfbd9. 
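For scale, the calico/node pull that just completed moved roughly 157 MB in about 9.06 s, i.e. around 17 MB/s. The arithmetic, with both figures taken verbatim from the "Pulled image" entry above (illustrative only):

package main

// Back-of-the-envelope throughput for the calico/node pull logged above.
// Both constants are copied from the containerd "Pulled image" entry.
import "fmt"

func main() {
	const sizeBytes = 156883537        // reported image size in bytes
	const elapsedSeconds = 9.063119907 // reported pull duration
	fmt.Printf("~%.1f MB/s\n", sizeBytes/elapsedSeconds/1e6) // ~17.3 MB/s
}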
Jan 24 12:15:39.771000 audit: BPF prog-id=172 op=LOAD Jan 24 12:15:39.775702 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 24 12:15:39.775767 kernel: audit: type=1334 audit(1769256939.771:559): prog-id=172 op=LOAD Jan 24 12:15:39.771000 audit[3861]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3326 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:39.796755 kernel: audit: type=1300 audit(1769256939.771:559): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3326 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:39.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306238383035336232653763346438393635646461653030346466 Jan 24 12:15:39.812689 kernel: audit: type=1327 audit(1769256939.771:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306238383035336232653763346438393635646461653030346466 Jan 24 12:15:39.771000 audit: BPF prog-id=173 op=LOAD Jan 24 12:15:39.817454 kernel: audit: type=1334 audit(1769256939.771:560): prog-id=173 op=LOAD Jan 24 12:15:39.818273 kernel: audit: type=1300 audit(1769256939.771:560): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3326 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:39.771000 audit[3861]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3326 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:39.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306238383035336232653763346438393635646461653030346466 Jan 24 12:15:39.848189 containerd[1599]: time="2026-01-24T12:15:39.844315245Z" level=info msg="StartContainer for \"650b88053b2e7c4d8965ddae004dfe0301a905cba2218a8bc35a00930f5cfbd9\" returns successfully" Jan 24 12:15:39.851723 kernel: audit: type=1327 audit(1769256939.771:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306238383035336232653763346438393635646461653030346466 Jan 24 12:15:39.771000 audit: BPF prog-id=173 op=UNLOAD Jan 24 12:15:39.856771 kernel: audit: type=1334 audit(1769256939.771:561): prog-id=173 op=UNLOAD Jan 24 12:15:39.856823 kernel: audit: type=1300 audit(1769256939.771:561): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3861 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:39.771000 audit[3861]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:39.875191 kernel: audit: type=1327 audit(1769256939.771:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306238383035336232653763346438393635646461653030346466 Jan 24 12:15:39.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306238383035336232653763346438393635646461653030346466 Jan 24 12:15:39.771000 audit: BPF prog-id=172 op=UNLOAD Jan 24 12:15:39.899062 kernel: audit: type=1334 audit(1769256939.771:562): prog-id=172 op=UNLOAD Jan 24 12:15:39.771000 audit[3861]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:39.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306238383035336232653763346438393635646461653030346466 Jan 24 12:15:39.771000 audit: BPF prog-id=174 op=LOAD Jan 24 12:15:39.771000 audit[3861]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3326 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:39.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306238383035336232653763346438393635646461653030346466 Jan 24 12:15:39.991460 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 12:15:39.991641 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
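The PROCTITLE fields in the audit records above are hex-encoded, NUL-separated command lines. A small decoder sketch (not part of the logged system) makes them readable; for the records above it recovers the runc invocation for the container just started, with the container ID cut short by the audit record's length cap:

package main

// Decode an audit PROCTITLE value (hex-encoded argv, NUL-separated), like the
// ones in the records above. For those records the output is roughly:
//   runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/650b88053b2e7c4d8965ddae004df...
// (the container ID is truncated because the audit field itself is capped).

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: proctitle <hex-string>")
		os.Exit(1)
	}
	raw, err := hex.DecodeString(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "bad hex:", err)
		os.Exit(1)
	}
	// argv elements are separated by NUL bytes inside the proctitle.
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
}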
Jan 24 12:15:40.247289 kubelet[2795]: I0124 12:15:40.247013 2795 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a12ceae4-0501-4e3c-a330-62b420b78e69-whisker-backend-key-pair\") pod \"a12ceae4-0501-4e3c-a330-62b420b78e69\" (UID: \"a12ceae4-0501-4e3c-a330-62b420b78e69\") " Jan 24 12:15:40.248642 kubelet[2795]: I0124 12:15:40.247830 2795 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a12ceae4-0501-4e3c-a330-62b420b78e69-whisker-ca-bundle\") pod \"a12ceae4-0501-4e3c-a330-62b420b78e69\" (UID: \"a12ceae4-0501-4e3c-a330-62b420b78e69\") " Jan 24 12:15:40.248642 kubelet[2795]: I0124 12:15:40.247855 2795 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54mww\" (UniqueName: \"kubernetes.io/projected/a12ceae4-0501-4e3c-a330-62b420b78e69-kube-api-access-54mww\") pod \"a12ceae4-0501-4e3c-a330-62b420b78e69\" (UID: \"a12ceae4-0501-4e3c-a330-62b420b78e69\") " Jan 24 12:15:40.249848 kubelet[2795]: I0124 12:15:40.249758 2795 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12ceae4-0501-4e3c-a330-62b420b78e69-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a12ceae4-0501-4e3c-a330-62b420b78e69" (UID: "a12ceae4-0501-4e3c-a330-62b420b78e69"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 12:15:40.254869 kubelet[2795]: I0124 12:15:40.254765 2795 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a12ceae4-0501-4e3c-a330-62b420b78e69-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a12ceae4-0501-4e3c-a330-62b420b78e69" (UID: "a12ceae4-0501-4e3c-a330-62b420b78e69"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 12:15:40.254869 kubelet[2795]: I0124 12:15:40.254771 2795 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a12ceae4-0501-4e3c-a330-62b420b78e69-kube-api-access-54mww" (OuterVolumeSpecName: "kube-api-access-54mww") pod "a12ceae4-0501-4e3c-a330-62b420b78e69" (UID: "a12ceae4-0501-4e3c-a330-62b420b78e69"). InnerVolumeSpecName "kube-api-access-54mww". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 12:15:40.342725 systemd[1]: var-lib-kubelet-pods-a12ceae4\x2d0501\x2d4e3c\x2da330\x2d62b420b78e69-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d54mww.mount: Deactivated successfully. Jan 24 12:15:40.343233 systemd[1]: var-lib-kubelet-pods-a12ceae4\x2d0501\x2d4e3c\x2da330\x2d62b420b78e69-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
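The two mount-unit names just deactivated are systemd path escapes of the pod's volume directories ("-" stands for "/", other bytes become \xNN). A small sketch that reverses the escaping (illustrative, not part of the logged system) maps them back to the kubelet paths under /var/lib/kubelet/pods/a12ceae4-.../volumes:

package main

// Reverse systemd's path escaping as seen in the mount-unit names above
// ("-" stands for "/", while "\xNN" encodes the original byte). Shown only to
// make the unit names readable; not part of the logged system.

import (
	"encoding/hex"
	"fmt"
	"regexp"
	"strings"
)

func unescapeMountUnit(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	name = strings.ReplaceAll(name, "-", "/") // literal '-' separators are path slashes
	re := regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)
	name = re.ReplaceAllStringFunc(name, func(m string) string {
		b, _ := hex.DecodeString(m[2:])
		return string(b)
	})
	return "/" + name
}

func main() {
	// Unit name copied from the journal entry above.
	unit := `var-lib-kubelet-pods-a12ceae4\x2d0501\x2d4e3c\x2da330\x2d62b420b78e69-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d54mww.mount`
	fmt.Println(unescapeMountUnit(unit))
	// -> /var/lib/kubelet/pods/a12ceae4-0501-4e3c-a330-62b420b78e69/volumes/kubernetes.io~projected/kube-api-access-54mww
}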
Jan 24 12:15:40.348306 kubelet[2795]: I0124 12:15:40.348279 2795 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a12ceae4-0501-4e3c-a330-62b420b78e69-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 24 12:15:40.349419 kubelet[2795]: I0124 12:15:40.349074 2795 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-54mww\" (UniqueName: \"kubernetes.io/projected/a12ceae4-0501-4e3c-a330-62b420b78e69-kube-api-access-54mww\") on node \"localhost\" DevicePath \"\"" Jan 24 12:15:40.349419 kubelet[2795]: I0124 12:15:40.349172 2795 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a12ceae4-0501-4e3c-a330-62b420b78e69-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 24 12:15:40.565951 kubelet[2795]: E0124 12:15:40.565486 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:40.576029 systemd[1]: Removed slice kubepods-besteffort-poda12ceae4_0501_4e3c_a330_62b420b78e69.slice - libcontainer container kubepods-besteffort-poda12ceae4_0501_4e3c_a330_62b420b78e69.slice. Jan 24 12:15:40.589761 kubelet[2795]: I0124 12:15:40.589659 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l6gtl" podStartSLOduration=1.953552567 podStartE2EDuration="18.588395377s" podCreationTimestamp="2026-01-24 12:15:22 +0000 UTC" firstStartedPulling="2026-01-24 12:15:22.940825667 +0000 UTC m=+19.805267979" lastFinishedPulling="2026-01-24 12:15:39.575668477 +0000 UTC m=+36.440110789" observedRunningTime="2026-01-24 12:15:40.587213201 +0000 UTC m=+37.451655533" watchObservedRunningTime="2026-01-24 12:15:40.588395377 +0000 UTC m=+37.452837709" Jan 24 12:15:40.687447 systemd[1]: Created slice kubepods-besteffort-pode91bb632_3654_4ec6_9f05_a5289f173ff2.slice - libcontainer container kubepods-besteffort-pode91bb632_3654_4ec6_9f05_a5289f173ff2.slice. 
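The pod_startup_latency_tracker entry above reports two figures for calico-node-l6gtl: podStartE2EDuration (pod creation to observed running) and podStartSLOduration, which appears to be the same interval with the image-pull window subtracted. The logged timestamps reproduce both values exactly; a quick check (every timestamp copied from the log, the subtraction is the only thing added here):

package main

// Re-derive the two durations in the pod_startup_latency_tracker entry above
// for calico-system/calico-node-l6gtl. Every timestamp is copied from the log;
// the only logic here is the subtraction.

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-24 12:15:22 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2026-01-24 12:15:22.940825667 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2026-01-24 12:15:39.575668477 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2026-01-24 12:15:40.588395377 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // 18.588395377s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.953552567s  = podStartSLOduration
	fmt.Println(e2e, slo)
}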
Jan 24 12:15:40.752436 kubelet[2795]: I0124 12:15:40.752294 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e91bb632-3654-4ec6-9f05-a5289f173ff2-whisker-ca-bundle\") pod \"whisker-7c54756445-jqv9f\" (UID: \"e91bb632-3654-4ec6-9f05-a5289f173ff2\") " pod="calico-system/whisker-7c54756445-jqv9f" Jan 24 12:15:40.752436 kubelet[2795]: I0124 12:15:40.752366 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e91bb632-3654-4ec6-9f05-a5289f173ff2-whisker-backend-key-pair\") pod \"whisker-7c54756445-jqv9f\" (UID: \"e91bb632-3654-4ec6-9f05-a5289f173ff2\") " pod="calico-system/whisker-7c54756445-jqv9f" Jan 24 12:15:40.752436 kubelet[2795]: I0124 12:15:40.752383 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr58p\" (UniqueName: \"kubernetes.io/projected/e91bb632-3654-4ec6-9f05-a5289f173ff2-kube-api-access-gr58p\") pod \"whisker-7c54756445-jqv9f\" (UID: \"e91bb632-3654-4ec6-9f05-a5289f173ff2\") " pod="calico-system/whisker-7c54756445-jqv9f" Jan 24 12:15:40.994249 containerd[1599]: time="2026-01-24T12:15:40.993375644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c54756445-jqv9f,Uid:e91bb632-3654-4ec6-9f05-a5289f173ff2,Namespace:calico-system,Attempt:0,}" Jan 24 12:15:41.272637 systemd-networkd[1516]: calic8d2d72ecc7: Link UP Jan 24 12:15:41.273771 systemd-networkd[1516]: calic8d2d72ecc7: Gained carrier Jan 24 12:15:41.293798 containerd[1599]: 2026-01-24 12:15:41.030 [INFO][3931] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 12:15:41.293798 containerd[1599]: 2026-01-24 12:15:41.059 [INFO][3931] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7c54756445--jqv9f-eth0 whisker-7c54756445- calico-system e91bb632-3654-4ec6-9f05-a5289f173ff2 939 0 2026-01-24 12:15:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c54756445 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7c54756445-jqv9f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic8d2d72ecc7 [] [] }} ContainerID="c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" Namespace="calico-system" Pod="whisker-7c54756445-jqv9f" WorkloadEndpoint="localhost-k8s-whisker--7c54756445--jqv9f-" Jan 24 12:15:41.293798 containerd[1599]: 2026-01-24 12:15:41.059 [INFO][3931] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" Namespace="calico-system" Pod="whisker-7c54756445-jqv9f" WorkloadEndpoint="localhost-k8s-whisker--7c54756445--jqv9f-eth0" Jan 24 12:15:41.293798 containerd[1599]: 2026-01-24 12:15:41.170 [INFO][3944] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" HandleID="k8s-pod-network.c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" Workload="localhost-k8s-whisker--7c54756445--jqv9f-eth0" Jan 24 12:15:41.294065 containerd[1599]: 2026-01-24 12:15:41.171 [INFO][3944] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" HandleID="k8s-pod-network.c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" Workload="localhost-k8s-whisker--7c54756445--jqv9f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e330), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7c54756445-jqv9f", "timestamp":"2026-01-24 12:15:41.170289637 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:15:41.294065 containerd[1599]: 2026-01-24 12:15:41.171 [INFO][3944] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:15:41.294065 containerd[1599]: 2026-01-24 12:15:41.172 [INFO][3944] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 12:15:41.294065 containerd[1599]: 2026-01-24 12:15:41.172 [INFO][3944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:15:41.294065 containerd[1599]: 2026-01-24 12:15:41.184 [INFO][3944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" host="localhost" Jan 24 12:15:41.294065 containerd[1599]: 2026-01-24 12:15:41.201 [INFO][3944] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:15:41.294065 containerd[1599]: 2026-01-24 12:15:41.213 [INFO][3944] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:15:41.294065 containerd[1599]: 2026-01-24 12:15:41.220 [INFO][3944] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:41.294065 containerd[1599]: 2026-01-24 12:15:41.224 [INFO][3944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:41.294065 containerd[1599]: 2026-01-24 12:15:41.225 [INFO][3944] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" host="localhost" Jan 24 12:15:41.294411 containerd[1599]: 2026-01-24 12:15:41.229 [INFO][3944] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57 Jan 24 12:15:41.294411 containerd[1599]: 2026-01-24 12:15:41.239 [INFO][3944] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" host="localhost" Jan 24 12:15:41.294411 containerd[1599]: 2026-01-24 12:15:41.251 [INFO][3944] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" host="localhost" Jan 24 12:15:41.294411 containerd[1599]: 2026-01-24 12:15:41.251 [INFO][3944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" host="localhost" Jan 24 12:15:41.294411 containerd[1599]: 2026-01-24 12:15:41.251 [INFO][3944] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 12:15:41.294411 containerd[1599]: 2026-01-24 12:15:41.251 [INFO][3944] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" HandleID="k8s-pod-network.c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" Workload="localhost-k8s-whisker--7c54756445--jqv9f-eth0" Jan 24 12:15:41.294569 containerd[1599]: 2026-01-24 12:15:41.254 [INFO][3931] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" Namespace="calico-system" Pod="whisker-7c54756445-jqv9f" WorkloadEndpoint="localhost-k8s-whisker--7c54756445--jqv9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7c54756445--jqv9f-eth0", GenerateName:"whisker-7c54756445-", Namespace:"calico-system", SelfLink:"", UID:"e91bb632-3654-4ec6-9f05-a5289f173ff2", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c54756445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7c54756445-jqv9f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic8d2d72ecc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:41.294569 containerd[1599]: 2026-01-24 12:15:41.258 [INFO][3931] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" Namespace="calico-system" Pod="whisker-7c54756445-jqv9f" WorkloadEndpoint="localhost-k8s-whisker--7c54756445--jqv9f-eth0" Jan 24 12:15:41.294710 containerd[1599]: 2026-01-24 12:15:41.258 [INFO][3931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8d2d72ecc7 ContainerID="c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" Namespace="calico-system" Pod="whisker-7c54756445-jqv9f" WorkloadEndpoint="localhost-k8s-whisker--7c54756445--jqv9f-eth0" Jan 24 12:15:41.294710 containerd[1599]: 2026-01-24 12:15:41.274 [INFO][3931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" Namespace="calico-system" Pod="whisker-7c54756445-jqv9f" WorkloadEndpoint="localhost-k8s-whisker--7c54756445--jqv9f-eth0" Jan 24 12:15:41.294752 containerd[1599]: 2026-01-24 12:15:41.274 [INFO][3931] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" Namespace="calico-system" Pod="whisker-7c54756445-jqv9f" WorkloadEndpoint="localhost-k8s-whisker--7c54756445--jqv9f-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7c54756445--jqv9f-eth0", GenerateName:"whisker-7c54756445-", Namespace:"calico-system", SelfLink:"", UID:"e91bb632-3654-4ec6-9f05-a5289f173ff2", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c54756445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57", Pod:"whisker-7c54756445-jqv9f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic8d2d72ecc7", MAC:"b2:85:9a:07:f2:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:41.294864 containerd[1599]: 2026-01-24 12:15:41.287 [INFO][3931] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" Namespace="calico-system" Pod="whisker-7c54756445-jqv9f" WorkloadEndpoint="localhost-k8s-whisker--7c54756445--jqv9f-eth0" Jan 24 12:15:41.321003 kubelet[2795]: E0124 12:15:41.320900 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:41.321744 containerd[1599]: time="2026-01-24T12:15:41.321697115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8xp4w,Uid:38d7fa86-6007-4b91-a728-a99e3b4e27e2,Namespace:kube-system,Attempt:0,}" Jan 24 12:15:41.324625 kubelet[2795]: I0124 12:15:41.324368 2795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a12ceae4-0501-4e3c-a330-62b420b78e69" path="/var/lib/kubelet/pods/a12ceae4-0501-4e3c-a330-62b420b78e69/volumes" Jan 24 12:15:41.408790 containerd[1599]: time="2026-01-24T12:15:41.408252800Z" level=info msg="connecting to shim c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57" address="unix:///run/containerd/s/5e08e36eba9af3687ad56bb479235bb41468a6a6bc0d19dd3eacbd778a4febc8" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:15:41.498498 systemd[1]: Started cri-containerd-c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57.scope - libcontainer container c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57. 
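The IPAM trace above allocates out of the node's affine block 192.168.88.128/26 and hands the new whisker endpoint 192.168.88.129. For reference, a /26 spans 64 addresses (.128 through .191); a standalone check of the values from the log (standard prefix arithmetic, not Calico code):

package main

// Sanity-check the IPAM values in the trace above: the affine block
// 192.168.88.128/26 holds 64 addresses and contains the assigned 192.168.88.129.
// Values are taken from the log; the prefix arithmetic is standard, not Calico's.

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	assigned := netip.MustParseAddr("192.168.88.129")

	fmt.Println("addresses in block:", 1<<(32-block.Bits()))             // 64
	fmt.Println("block contains assigned IP:", block.Contains(assigned)) // true
}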
Jan 24 12:15:41.557000 audit: BPF prog-id=175 op=LOAD Jan 24 12:15:41.558000 audit: BPF prog-id=176 op=LOAD Jan 24 12:15:41.558000 audit[4010]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3982 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:41.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330306336323062323966376336366434343339626135313239393364 Jan 24 12:15:41.558000 audit: BPF prog-id=176 op=UNLOAD Jan 24 12:15:41.558000 audit[4010]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3982 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:41.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330306336323062323966376336366434343339626135313239393364 Jan 24 12:15:41.559000 audit: BPF prog-id=177 op=LOAD Jan 24 12:15:41.559000 audit[4010]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3982 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:41.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330306336323062323966376336366434343339626135313239393364 Jan 24 12:15:41.559000 audit: BPF prog-id=178 op=LOAD Jan 24 12:15:41.559000 audit[4010]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3982 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:41.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330306336323062323966376336366434343339626135313239393364 Jan 24 12:15:41.559000 audit: BPF prog-id=178 op=UNLOAD Jan 24 12:15:41.559000 audit[4010]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3982 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:41.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330306336323062323966376336366434343339626135313239393364 Jan 24 12:15:41.559000 audit: BPF prog-id=177 op=UNLOAD Jan 24 12:15:41.559000 audit[4010]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3982 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:41.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330306336323062323966376336366434343339626135313239393364 Jan 24 12:15:41.559000 audit: BPF prog-id=179 op=LOAD Jan 24 12:15:41.559000 audit[4010]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3982 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:41.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330306336323062323966376336366434343339626135313239393364 Jan 24 12:15:41.562683 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:15:41.573865 kubelet[2795]: I0124 12:15:41.573704 2795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 12:15:41.576448 kubelet[2795]: E0124 12:15:41.576241 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:41.635762 systemd-networkd[1516]: cali6b650bd685c: Link UP Jan 24 12:15:41.637756 systemd-networkd[1516]: cali6b650bd685c: Gained carrier Jan 24 12:15:41.694406 containerd[1599]: 2026-01-24 12:15:41.416 [INFO][3960] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 12:15:41.694406 containerd[1599]: 2026-01-24 12:15:41.446 [INFO][3960] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--8xp4w-eth0 coredns-668d6bf9bc- kube-system 38d7fa86-6007-4b91-a728-a99e3b4e27e2 863 0 2026-01-24 12:15:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-8xp4w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b650bd685c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" Namespace="kube-system" Pod="coredns-668d6bf9bc-8xp4w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8xp4w-" Jan 24 12:15:41.694406 containerd[1599]: 2026-01-24 12:15:41.446 [INFO][3960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" Namespace="kube-system" Pod="coredns-668d6bf9bc-8xp4w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8xp4w-eth0" Jan 24 12:15:41.694406 containerd[1599]: 2026-01-24 12:15:41.531 [INFO][4038] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" 
HandleID="k8s-pod-network.c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" Workload="localhost-k8s-coredns--668d6bf9bc--8xp4w-eth0" Jan 24 12:15:41.694806 containerd[1599]: 2026-01-24 12:15:41.534 [INFO][4038] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" HandleID="k8s-pod-network.c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" Workload="localhost-k8s-coredns--668d6bf9bc--8xp4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001381f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-8xp4w", "timestamp":"2026-01-24 12:15:41.531943246 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:15:41.694806 containerd[1599]: 2026-01-24 12:15:41.534 [INFO][4038] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:15:41.694806 containerd[1599]: 2026-01-24 12:15:41.534 [INFO][4038] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 12:15:41.694806 containerd[1599]: 2026-01-24 12:15:41.535 [INFO][4038] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:15:41.694806 containerd[1599]: 2026-01-24 12:15:41.545 [INFO][4038] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" host="localhost" Jan 24 12:15:41.694806 containerd[1599]: 2026-01-24 12:15:41.558 [INFO][4038] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:15:41.694806 containerd[1599]: 2026-01-24 12:15:41.573 [INFO][4038] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:15:41.694806 containerd[1599]: 2026-01-24 12:15:41.585 [INFO][4038] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:41.694806 containerd[1599]: 2026-01-24 12:15:41.589 [INFO][4038] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:41.694806 containerd[1599]: 2026-01-24 12:15:41.589 [INFO][4038] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" host="localhost" Jan 24 12:15:41.695067 containerd[1599]: 2026-01-24 12:15:41.593 [INFO][4038] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24 Jan 24 12:15:41.695067 containerd[1599]: 2026-01-24 12:15:41.609 [INFO][4038] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" host="localhost" Jan 24 12:15:41.695067 containerd[1599]: 2026-01-24 12:15:41.617 [INFO][4038] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" host="localhost" Jan 24 12:15:41.695067 containerd[1599]: 2026-01-24 12:15:41.618 [INFO][4038] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" host="localhost" Jan 24 
12:15:41.695067 containerd[1599]: 2026-01-24 12:15:41.618 [INFO][4038] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 12:15:41.695067 containerd[1599]: 2026-01-24 12:15:41.618 [INFO][4038] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" HandleID="k8s-pod-network.c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" Workload="localhost-k8s-coredns--668d6bf9bc--8xp4w-eth0" Jan 24 12:15:41.695277 containerd[1599]: 2026-01-24 12:15:41.628 [INFO][3960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" Namespace="kube-system" Pod="coredns-668d6bf9bc-8xp4w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8xp4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8xp4w-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"38d7fa86-6007-4b91-a728-a99e3b4e27e2", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-8xp4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b650bd685c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:41.695415 containerd[1599]: 2026-01-24 12:15:41.628 [INFO][3960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" Namespace="kube-system" Pod="coredns-668d6bf9bc-8xp4w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8xp4w-eth0" Jan 24 12:15:41.695415 containerd[1599]: 2026-01-24 12:15:41.629 [INFO][3960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b650bd685c ContainerID="c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" Namespace="kube-system" Pod="coredns-668d6bf9bc-8xp4w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8xp4w-eth0" Jan 24 12:15:41.695415 containerd[1599]: 2026-01-24 12:15:41.644 [INFO][3960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-8xp4w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8xp4w-eth0" Jan 24 12:15:41.695481 containerd[1599]: 2026-01-24 12:15:41.663 [INFO][3960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" Namespace="kube-system" Pod="coredns-668d6bf9bc-8xp4w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8xp4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8xp4w-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"38d7fa86-6007-4b91-a728-a99e3b4e27e2", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24", Pod:"coredns-668d6bf9bc-8xp4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b650bd685c", MAC:"ea:28:e9:ff:9a:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:41.695481 containerd[1599]: 2026-01-24 12:15:41.685 [INFO][3960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" Namespace="kube-system" Pod="coredns-668d6bf9bc-8xp4w" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8xp4w-eth0" Jan 24 12:15:41.706507 containerd[1599]: time="2026-01-24T12:15:41.706349907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c54756445-jqv9f,Uid:e91bb632-3654-4ec6-9f05-a5289f173ff2,Namespace:calico-system,Attempt:0,} returns sandbox id \"c00c620b29f7c66d4439ba512993d99a52a0cd18cb651fc15835c86883e93e57\"" Jan 24 12:15:41.711440 containerd[1599]: time="2026-01-24T12:15:41.710769036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 12:15:41.791693 containerd[1599]: time="2026-01-24T12:15:41.791572404Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:41.793911 containerd[1599]: time="2026-01-24T12:15:41.793738487Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 12:15:41.794858 containerd[1599]: time="2026-01-24T12:15:41.793852410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:41.795948 kubelet[2795]: E0124 12:15:41.795692 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 12:15:41.795948 kubelet[2795]: E0124 12:15:41.795853 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 12:15:41.802829 kubelet[2795]: E0124 12:15:41.802654 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:470b03a2107d413e8e61da949363329c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gr58p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c54756445-jqv9f_calico-system(e91bb632-3654-4ec6-9f05-a5289f173ff2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 12:15:41.806956 containerd[1599]: time="2026-01-24T12:15:41.806876499Z" level=info msg="connecting to shim c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24" address="unix:///run/containerd/s/ae8d418d26845ba6331fdd50e558a0135cb7f6c7532f355419654963a38407db" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:15:41.807426 containerd[1599]: time="2026-01-24T12:15:41.807360012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 12:15:41.878307 containerd[1599]: time="2026-01-24T12:15:41.878051446Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:41.880590 containerd[1599]: 
time="2026-01-24T12:15:41.880478116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 12:15:41.880771 containerd[1599]: time="2026-01-24T12:15:41.880693578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:41.881726 kubelet[2795]: E0124 12:15:41.881213 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 12:15:41.881726 kubelet[2795]: E0124 12:15:41.881305 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 12:15:41.881822 kubelet[2795]: E0124 12:15:41.881396 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gr58p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c54756445-jqv9f_calico-system(e91bb632-3654-4ec6-9f05-a5289f173ff2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 
12:15:41.882690 kubelet[2795]: E0124 12:15:41.882609 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c54756445-jqv9f" podUID="e91bb632-3654-4ec6-9f05-a5289f173ff2" Jan 24 12:15:41.910353 systemd[1]: Started cri-containerd-c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24.scope - libcontainer container c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24. Jan 24 12:15:41.927000 audit: BPF prog-id=180 op=LOAD Jan 24 12:15:41.928000 audit: BPF prog-id=181 op=LOAD Jan 24 12:15:41.928000 audit[4158]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4146 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:41.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334613863323639646661616464383530376161656532653363613038 Jan 24 12:15:41.928000 audit: BPF prog-id=181 op=UNLOAD Jan 24 12:15:41.928000 audit[4158]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:41.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334613863323639646661616464383530376161656532653363613038 Jan 24 12:15:41.928000 audit: BPF prog-id=182 op=LOAD Jan 24 12:15:41.928000 audit[4158]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4146 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:41.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334613863323639646661616464383530376161656532653363613038 Jan 24 12:15:41.928000 audit: BPF prog-id=183 op=LOAD Jan 24 12:15:41.928000 audit[4158]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4146 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:41.928000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334613863323639646661616464383530376161656532653363613038 Jan 24 12:15:41.928000 audit: BPF prog-id=183 op=UNLOAD Jan 24 12:15:41.928000 audit[4158]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:41.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334613863323639646661616464383530376161656532653363613038 Jan 24 12:15:41.929000 audit: BPF prog-id=182 op=UNLOAD Jan 24 12:15:41.929000 audit[4158]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:41.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334613863323639646661616464383530376161656532653363613038 Jan 24 12:15:41.929000 audit: BPF prog-id=184 op=LOAD Jan 24 12:15:41.929000 audit[4158]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4146 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:41.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334613863323639646661616464383530376161656532653363613038 Jan 24 12:15:41.931445 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:15:41.986768 containerd[1599]: time="2026-01-24T12:15:41.986623356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8xp4w,Uid:38d7fa86-6007-4b91-a728-a99e3b4e27e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24\"" Jan 24 12:15:41.988490 kubelet[2795]: E0124 12:15:41.988295 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:41.991257 containerd[1599]: time="2026-01-24T12:15:41.990782790Z" level=info msg="CreateContainer within sandbox \"c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 12:15:42.016912 containerd[1599]: time="2026-01-24T12:15:42.016806203Z" level=info msg="Container 69ec8a9fb62a67ea7944001c2066a6801796d110d9735749d6b541617d5260cd: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:15:42.029056 containerd[1599]: time="2026-01-24T12:15:42.028978161Z" 
level=info msg="CreateContainer within sandbox \"c4a8c269dfaadd8507aaee2e3ca081668a5295f8f7f2417f29bbea687d0f3e24\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69ec8a9fb62a67ea7944001c2066a6801796d110d9735749d6b541617d5260cd\"" Jan 24 12:15:42.030228 containerd[1599]: time="2026-01-24T12:15:42.029987117Z" level=info msg="StartContainer for \"69ec8a9fb62a67ea7944001c2066a6801796d110d9735749d6b541617d5260cd\"" Jan 24 12:15:42.031783 containerd[1599]: time="2026-01-24T12:15:42.031610497Z" level=info msg="connecting to shim 69ec8a9fb62a67ea7944001c2066a6801796d110d9735749d6b541617d5260cd" address="unix:///run/containerd/s/ae8d418d26845ba6331fdd50e558a0135cb7f6c7532f355419654963a38407db" protocol=ttrpc version=3 Jan 24 12:15:42.070451 systemd[1]: Started cri-containerd-69ec8a9fb62a67ea7944001c2066a6801796d110d9735749d6b541617d5260cd.scope - libcontainer container 69ec8a9fb62a67ea7944001c2066a6801796d110d9735749d6b541617d5260cd. Jan 24 12:15:42.101000 audit: BPF prog-id=185 op=LOAD Jan 24 12:15:42.102000 audit: BPF prog-id=186 op=LOAD Jan 24 12:15:42.102000 audit[4182]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4146 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639656338613966623632613637656137393434303031633230363661 Jan 24 12:15:42.103000 audit: BPF prog-id=186 op=UNLOAD Jan 24 12:15:42.103000 audit[4182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639656338613966623632613637656137393434303031633230363661 Jan 24 12:15:42.103000 audit: BPF prog-id=187 op=LOAD Jan 24 12:15:42.103000 audit[4182]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4146 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639656338613966623632613637656137393434303031633230363661 Jan 24 12:15:42.103000 audit: BPF prog-id=188 op=LOAD Jan 24 12:15:42.103000 audit[4182]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4146 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.103000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639656338613966623632613637656137393434303031633230363661 Jan 24 12:15:42.103000 audit: BPF prog-id=188 op=UNLOAD Jan 24 12:15:42.103000 audit[4182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639656338613966623632613637656137393434303031633230363661 Jan 24 12:15:42.103000 audit: BPF prog-id=187 op=UNLOAD Jan 24 12:15:42.103000 audit[4182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639656338613966623632613637656137393434303031633230363661 Jan 24 12:15:42.103000 audit: BPF prog-id=189 op=LOAD Jan 24 12:15:42.103000 audit[4182]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4146 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639656338613966623632613637656137393434303031633230363661 Jan 24 12:15:42.147944 containerd[1599]: time="2026-01-24T12:15:42.147664832Z" level=info msg="StartContainer for \"69ec8a9fb62a67ea7944001c2066a6801796d110d9735749d6b541617d5260cd\" returns successfully" Jan 24 12:15:42.322059 kubelet[2795]: E0124 12:15:42.321715 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:42.323644 containerd[1599]: time="2026-01-24T12:15:42.323478566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5rqq9,Uid:8adc59a5-58dd-4725-9866-6432e45b7341,Namespace:kube-system,Attempt:0,}" Jan 24 12:15:42.323916 containerd[1599]: time="2026-01-24T12:15:42.323713571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vfhsw,Uid:d101ff7c-9560-44ae-a339-4a5dc1053aeb,Namespace:calico-system,Attempt:0,}" Jan 24 12:15:42.485504 systemd-networkd[1516]: calic8d2d72ecc7: Gained IPv6LL Jan 24 12:15:42.533799 systemd-networkd[1516]: cali549c391cb6d: Link UP Jan 24 12:15:42.536455 systemd-networkd[1516]: cali549c391cb6d: Gained carrier Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.390 [INFO][4218] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.406 [INFO][4218] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vfhsw-eth0 csi-node-driver- calico-system d101ff7c-9560-44ae-a339-4a5dc1053aeb 757 0 2026-01-24 12:15:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vfhsw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali549c391cb6d [] [] }} ContainerID="9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" Namespace="calico-system" Pod="csi-node-driver-vfhsw" WorkloadEndpoint="localhost-k8s-csi--node--driver--vfhsw-" Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.406 [INFO][4218] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" Namespace="calico-system" Pod="csi-node-driver-vfhsw" WorkloadEndpoint="localhost-k8s-csi--node--driver--vfhsw-eth0" Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.457 [INFO][4246] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" HandleID="k8s-pod-network.9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" Workload="localhost-k8s-csi--node--driver--vfhsw-eth0" Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.458 [INFO][4246] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" HandleID="k8s-pod-network.9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" Workload="localhost-k8s-csi--node--driver--vfhsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e78a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vfhsw", "timestamp":"2026-01-24 12:15:42.457653833 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.458 [INFO][4246] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.458 [INFO][4246] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.458 [INFO][4246] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.471 [INFO][4246] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" host="localhost" Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.482 [INFO][4246] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.493 [INFO][4246] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.496 [INFO][4246] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.500 [INFO][4246] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.500 [INFO][4246] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" host="localhost" Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.503 [INFO][4246] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388 Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.514 [INFO][4246] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" host="localhost" Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.524 [INFO][4246] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" host="localhost" Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.524 [INFO][4246] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" host="localhost" Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.524 [INFO][4246] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
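(The ipam/ipam.go entries above trace one Calico block-affinity allocation end to end: acquire the host-wide IPAM lock, confirm the affinity for block 192.168.88.128/26 on host "localhost", claim the next free address, write the block back, release the lock. The toy Go sketch below only illustrates that pattern; the types and function names are invented for the example and are not the real projectcalico.org/libcalico-go API.)

// Illustrative sketch only: a toy model of the block-affinity IPAM sequence logged above
// (acquire host-wide lock, load the /26 block affine to this host, claim the first free
// address, record the claim, release the lock). Not Calico code.
package main

import (
	"fmt"
	"net"
	"sync"
)

// block models one /26 allocation block with a simple "already claimed" set.
type block struct {
	cidr      *net.IPNet
	allocated map[string]bool
}

type toyIPAM struct {
	mu     sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	blocks map[string]*block
}

// autoAssign claims one IPv4 address from the block affine to host.
func (i *toyIPAM) autoAssign(host, blockCIDR string) (net.IP, error) {
	i.mu.Lock() // "Acquired host-wide IPAM lock."
	defer i.mu.Unlock()

	b, ok := i.blocks[blockCIDR]
	if !ok {
		return nil, fmt.Errorf("no affine block %s for host %s", blockCIDR, host)
	}
	// Walk the block and take the first unclaimed address (skipping the network address).
	ip := b.cidr.IP.Mask(b.cidr.Mask)
	for ip := nextIP(ip); b.cidr.Contains(ip); ip = nextIP(ip) {
		if !b.allocated[ip.String()] {
			b.allocated[ip.String()] = true // "Writing block in order to claim IPs"
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", blockCIDR)
}

func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	ipam := &toyIPAM{blocks: map[string]*block{
		"192.168.88.128/26": {cidr: cidr, allocated: map[string]bool{
			"192.168.88.129": true, // assumed taken by an earlier pod in this log
			"192.168.88.130": true, // shown assigned to coredns-668d6bf9bc-8xp4w above
		}},
	}}
	ip, err := ipam.autoAssign("localhost", "192.168.88.128/26")
	fmt.Println(ip, err) // expect 192.168.88.131, matching the claim logged above
}
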
Jan 24 12:15:42.559468 containerd[1599]: 2026-01-24 12:15:42.524 [INFO][4246] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" HandleID="k8s-pod-network.9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" Workload="localhost-k8s-csi--node--driver--vfhsw-eth0" Jan 24 12:15:42.560289 containerd[1599]: 2026-01-24 12:15:42.527 [INFO][4218] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" Namespace="calico-system" Pod="csi-node-driver-vfhsw" WorkloadEndpoint="localhost-k8s-csi--node--driver--vfhsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vfhsw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d101ff7c-9560-44ae-a339-4a5dc1053aeb", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vfhsw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali549c391cb6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:42.560289 containerd[1599]: 2026-01-24 12:15:42.527 [INFO][4218] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" Namespace="calico-system" Pod="csi-node-driver-vfhsw" WorkloadEndpoint="localhost-k8s-csi--node--driver--vfhsw-eth0" Jan 24 12:15:42.560289 containerd[1599]: 2026-01-24 12:15:42.527 [INFO][4218] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali549c391cb6d ContainerID="9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" Namespace="calico-system" Pod="csi-node-driver-vfhsw" WorkloadEndpoint="localhost-k8s-csi--node--driver--vfhsw-eth0" Jan 24 12:15:42.560289 containerd[1599]: 2026-01-24 12:15:42.538 [INFO][4218] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" Namespace="calico-system" Pod="csi-node-driver-vfhsw" WorkloadEndpoint="localhost-k8s-csi--node--driver--vfhsw-eth0" Jan 24 12:15:42.560289 containerd[1599]: 2026-01-24 12:15:42.539 [INFO][4218] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" Namespace="calico-system" Pod="csi-node-driver-vfhsw" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--vfhsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vfhsw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d101ff7c-9560-44ae-a339-4a5dc1053aeb", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388", Pod:"csi-node-driver-vfhsw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali549c391cb6d", MAC:"12:18:89:75:03:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:42.560289 containerd[1599]: 2026-01-24 12:15:42.555 [INFO][4218] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" Namespace="calico-system" Pod="csi-node-driver-vfhsw" WorkloadEndpoint="localhost-k8s-csi--node--driver--vfhsw-eth0" Jan 24 12:15:42.582291 kubelet[2795]: E0124 12:15:42.582259 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:42.588422 kubelet[2795]: E0124 12:15:42.588267 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c54756445-jqv9f" podUID="e91bb632-3654-4ec6-9f05-a5289f173ff2" Jan 24 12:15:42.630867 containerd[1599]: time="2026-01-24T12:15:42.630767081Z" level=info msg="connecting to shim 9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388" address="unix:///run/containerd/s/78b618ff57360513809c5b18c811ecbcc5ca014e27cf04ff095005a4c2b28162" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:15:42.662637 kubelet[2795]: I0124 12:15:42.658827 2795 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8xp4w" podStartSLOduration=34.658810668 podStartE2EDuration="34.658810668s" podCreationTimestamp="2026-01-24 12:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 12:15:42.623015675 +0000 UTC m=+39.487457987" watchObservedRunningTime="2026-01-24 12:15:42.658810668 +0000 UTC m=+39.523252980" Jan 24 12:15:42.704752 systemd[1]: Started cri-containerd-9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388.scope - libcontainer container 9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388. Jan 24 12:15:42.711000 audit[4304]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4304 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:42.711000 audit[4304]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff7e8190d0 a2=0 a3=7fff7e8190bc items=0 ppid=2961 pid=4304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:42.719640 systemd-networkd[1516]: cali12899d0b881: Link UP Jan 24 12:15:42.719890 systemd-networkd[1516]: cali12899d0b881: Gained carrier Jan 24 12:15:42.719000 audit[4304]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4304 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:42.719000 audit[4304]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7e8190d0 a2=0 a3=0 items=0 ppid=2961 pid=4304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.384 [INFO][4216] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.407 [INFO][4216] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--5rqq9-eth0 coredns-668d6bf9bc- kube-system 8adc59a5-58dd-4725-9866-6432e45b7341 858 0 2026-01-24 12:15:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-5rqq9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali12899d0b881 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" Namespace="kube-system" Pod="coredns-668d6bf9bc-5rqq9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5rqq9-" Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.408 [INFO][4216] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" Namespace="kube-system" Pod="coredns-668d6bf9bc-5rqq9" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5rqq9-eth0" Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.461 [INFO][4248] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" HandleID="k8s-pod-network.6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" Workload="localhost-k8s-coredns--668d6bf9bc--5rqq9-eth0" Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.461 [INFO][4248] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" HandleID="k8s-pod-network.6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" Workload="localhost-k8s-coredns--668d6bf9bc--5rqq9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-5rqq9", "timestamp":"2026-01-24 12:15:42.461655083 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.461 [INFO][4248] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.524 [INFO][4248] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.525 [INFO][4248] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.572 [INFO][4248] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" host="localhost" Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.595 [INFO][4248] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.607 [INFO][4248] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.625 [INFO][4248] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.649 [INFO][4248] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.653 [INFO][4248] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" host="localhost" Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.660 [INFO][4248] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064 Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.687 [INFO][4248] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" host="localhost" Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.705 [INFO][4248] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" host="localhost" Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.705 [INFO][4248] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" host="localhost" Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.705 [INFO][4248] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 12:15:42.756183 containerd[1599]: 2026-01-24 12:15:42.705 [INFO][4248] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" HandleID="k8s-pod-network.6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" Workload="localhost-k8s-coredns--668d6bf9bc--5rqq9-eth0" Jan 24 12:15:42.756961 containerd[1599]: 2026-01-24 12:15:42.712 [INFO][4216] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" Namespace="kube-system" Pod="coredns-668d6bf9bc-5rqq9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5rqq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5rqq9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8adc59a5-58dd-4725-9866-6432e45b7341", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-5rqq9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12899d0b881", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:42.756961 containerd[1599]: 2026-01-24 12:15:42.714 [INFO][4216] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" Namespace="kube-system" Pod="coredns-668d6bf9bc-5rqq9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5rqq9-eth0" Jan 24 12:15:42.756961 containerd[1599]: 2026-01-24 12:15:42.714 [INFO][4216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12899d0b881 
ContainerID="6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" Namespace="kube-system" Pod="coredns-668d6bf9bc-5rqq9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5rqq9-eth0" Jan 24 12:15:42.756961 containerd[1599]: 2026-01-24 12:15:42.720 [INFO][4216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" Namespace="kube-system" Pod="coredns-668d6bf9bc-5rqq9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5rqq9-eth0" Jan 24 12:15:42.756961 containerd[1599]: 2026-01-24 12:15:42.722 [INFO][4216] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" Namespace="kube-system" Pod="coredns-668d6bf9bc-5rqq9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5rqq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5rqq9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8adc59a5-58dd-4725-9866-6432e45b7341", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064", Pod:"coredns-668d6bf9bc-5rqq9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12899d0b881", MAC:"d6:eb:c8:38:97:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:42.756961 containerd[1599]: 2026-01-24 12:15:42.743 [INFO][4216] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" Namespace="kube-system" Pod="coredns-668d6bf9bc-5rqq9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5rqq9-eth0" Jan 24 12:15:42.770000 audit: BPF prog-id=190 op=LOAD Jan 24 12:15:42.772000 audit: BPF prog-id=191 op=LOAD Jan 24 12:15:42.772000 audit[4291]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4280 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.772000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303733393036383763653964326231383437346461666465363336 Jan 24 12:15:42.772000 audit: BPF prog-id=191 op=UNLOAD Jan 24 12:15:42.772000 audit[4291]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4280 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303733393036383763653964326231383437346461666465363336 Jan 24 12:15:42.773000 audit: BPF prog-id=192 op=LOAD Jan 24 12:15:42.773000 audit[4291]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4280 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303733393036383763653964326231383437346461666465363336 Jan 24 12:15:42.773000 audit: BPF prog-id=193 op=LOAD Jan 24 12:15:42.773000 audit[4291]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4280 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303733393036383763653964326231383437346461666465363336 Jan 24 12:15:42.773000 audit: BPF prog-id=193 op=UNLOAD Jan 24 12:15:42.773000 audit[4291]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4280 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303733393036383763653964326231383437346461666465363336 Jan 24 12:15:42.773000 audit: BPF prog-id=192 op=UNLOAD Jan 24 12:15:42.773000 audit[4291]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4280 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.773000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303733393036383763653964326231383437346461666465363336 Jan 24 12:15:42.774000 audit: BPF prog-id=194 op=LOAD Jan 24 12:15:42.774000 audit[4291]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4280 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303733393036383763653964326231383437346461666465363336 Jan 24 12:15:42.779017 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:15:42.782000 audit[4320]: NETFILTER_CFG table=filter:119 family=2 entries=19 op=nft_register_rule pid=4320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:42.782000 audit[4320]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffa9da27d0 a2=0 a3=7fffa9da27bc items=0 ppid=2961 pid=4320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.782000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:42.790000 audit[4320]: NETFILTER_CFG table=nat:120 family=2 entries=33 op=nft_register_chain pid=4320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:42.790000 audit[4320]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7fffa9da27d0 a2=0 a3=7fffa9da27bc items=0 ppid=2961 pid=4320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:42.807186 containerd[1599]: time="2026-01-24T12:15:42.806851319Z" level=info msg="connecting to shim 6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064" address="unix:///run/containerd/s/a8db5f203afcbfa4af88b75270948550f20414464b73f2597233c545d1d2621b" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:15:42.835175 containerd[1599]: time="2026-01-24T12:15:42.834607373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vfhsw,Uid:d101ff7c-9560-44ae-a339-4a5dc1053aeb,Namespace:calico-system,Attempt:0,} returns sandbox id \"9207390687ce9d2b18474dafde636b39504fefb8d96dc7a08f077c5615166388\"" Jan 24 12:15:42.840712 containerd[1599]: time="2026-01-24T12:15:42.840497410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 12:15:42.863734 systemd[1]: Started cri-containerd-6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064.scope - libcontainer container 6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064. 
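(The kubelet dns.go "Nameserver limits exceeded" warnings repeated above indicate that the host resolv.conf lists more nameservers than kubelet will propagate; the log shows only 1.1.1.1, 1.0.0.1 and 8.8.8.8 being applied. The standalone Go sketch below is not kubelet code; the three-nameserver limit mirrors the behaviour seen in the log, and the file path and output format are illustrative assumptions.)

// Illustrative sketch only: read a resolv.conf and report which nameservers a
// three-entry limit (as reflected in the warnings above) would keep or drop.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // limit implied by the "Nameserver limits exceeded" warnings above

func main() {
	path := "/etc/resolv.conf"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "open:", err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("%d nameservers found; the first %v would be kept and %v dropped\n",
			len(servers), servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Printf("%d nameservers found; within the limit\n", len(servers))
	}
}
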
Jan 24 12:15:42.901000 audit: BPF prog-id=195 op=LOAD Jan 24 12:15:42.903000 audit: BPF prog-id=196 op=LOAD Jan 24 12:15:42.903000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4330 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665623435343032653361656631616633376665613964666634366161 Jan 24 12:15:42.903000 audit: BPF prog-id=196 op=UNLOAD Jan 24 12:15:42.903000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4330 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665623435343032653361656631616633376665613964666634366161 Jan 24 12:15:42.904000 audit: BPF prog-id=197 op=LOAD Jan 24 12:15:42.904000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4330 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665623435343032653361656631616633376665613964666634366161 Jan 24 12:15:42.904000 audit: BPF prog-id=198 op=LOAD Jan 24 12:15:42.904000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4330 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665623435343032653361656631616633376665613964666634366161 Jan 24 12:15:42.904000 audit: BPF prog-id=198 op=UNLOAD Jan 24 12:15:42.904000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4330 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665623435343032653361656631616633376665613964666634366161 Jan 24 12:15:42.904000 audit: BPF prog-id=197 op=UNLOAD Jan 24 12:15:42.904000 audit[4347]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4330 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665623435343032653361656631616633376665613964666634366161 Jan 24 12:15:42.904000 audit: BPF prog-id=199 op=LOAD Jan 24 12:15:42.904000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4330 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:42.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665623435343032653361656631616633376665613964666634366161 Jan 24 12:15:42.908671 containerd[1599]: time="2026-01-24T12:15:42.907070270Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:42.914330 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:15:42.916587 containerd[1599]: time="2026-01-24T12:15:42.916503748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:42.919434 containerd[1599]: time="2026-01-24T12:15:42.918294870Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 12:15:42.922469 kubelet[2795]: E0124 12:15:42.922309 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 12:15:42.924426 kubelet[2795]: E0124 12:15:42.923282 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 12:15:42.924768 kubelet[2795]: E0124 12:15:42.924714 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brds4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vfhsw_calico-system(d101ff7c-9560-44ae-a339-4a5dc1053aeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 12:15:42.928473 containerd[1599]: time="2026-01-24T12:15:42.928418036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 12:15:42.994587 containerd[1599]: time="2026-01-24T12:15:42.994364896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:42.996905 containerd[1599]: time="2026-01-24T12:15:42.996384848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 12:15:42.999469 containerd[1599]: time="2026-01-24T12:15:42.998055005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:43.000498 kubelet[2795]: E0124 12:15:43.000474 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 12:15:43.000666 kubelet[2795]: E0124 12:15:43.000649 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 12:15:43.000998 kubelet[2795]: E0124 12:15:43.000960 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brds4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vfhsw_calico-system(d101ff7c-9560-44ae-a339-4a5dc1053aeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 12:15:43.003017 kubelet[2795]: E0124 12:15:43.002694 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfhsw" podUID="d101ff7c-9560-44ae-a339-4a5dc1053aeb" Jan 24 12:15:43.023576 containerd[1599]: time="2026-01-24T12:15:43.023211929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5rqq9,Uid:8adc59a5-58dd-4725-9866-6432e45b7341,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064\"" Jan 24 12:15:43.028603 kubelet[2795]: E0124 12:15:43.028454 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:43.032652 containerd[1599]: time="2026-01-24T12:15:43.032603057Z" level=info msg="CreateContainer within sandbox \"6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 12:15:43.048918 containerd[1599]: time="2026-01-24T12:15:43.048756531Z" level=info msg="Container 7db11685738ed80394ecee49bb1c41d419189021e932dc080e9b5357d1acc35f: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:15:43.067996 containerd[1599]: time="2026-01-24T12:15:43.067701493Z" level=info msg="CreateContainer within sandbox \"6eb45402e3aef1af37fea9dff46aaaa7a6488c0583396045d36d12f312ae3064\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7db11685738ed80394ecee49bb1c41d419189021e932dc080e9b5357d1acc35f\"" Jan 24 12:15:43.070904 containerd[1599]: time="2026-01-24T12:15:43.070516468Z" level=info msg="StartContainer for \"7db11685738ed80394ecee49bb1c41d419189021e932dc080e9b5357d1acc35f\"" Jan 24 12:15:43.071740 containerd[1599]: time="2026-01-24T12:15:43.071596002Z" level=info msg="connecting to shim 7db11685738ed80394ecee49bb1c41d419189021e932dc080e9b5357d1acc35f" address="unix:///run/containerd/s/a8db5f203afcbfa4af88b75270948550f20414464b73f2597233c545d1d2621b" protocol=ttrpc version=3 Jan 24 12:15:43.103352 systemd[1]: Started cri-containerd-7db11685738ed80394ecee49bb1c41d419189021e932dc080e9b5357d1acc35f.scope - libcontainer container 7db11685738ed80394ecee49bb1c41d419189021e932dc080e9b5357d1acc35f. 
Jan 24 12:15:43.124369 systemd-networkd[1516]: cali6b650bd685c: Gained IPv6LL Jan 24 12:15:43.126000 audit: BPF prog-id=200 op=LOAD Jan 24 12:15:43.128000 audit: BPF prog-id=201 op=LOAD Jan 24 12:15:43.128000 audit[4399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4330 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764623131363835373338656438303339346563656534396262316334 Jan 24 12:15:43.128000 audit: BPF prog-id=201 op=UNLOAD Jan 24 12:15:43.128000 audit[4399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4330 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764623131363835373338656438303339346563656534396262316334 Jan 24 12:15:43.128000 audit: BPF prog-id=202 op=LOAD Jan 24 12:15:43.128000 audit[4399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4330 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764623131363835373338656438303339346563656534396262316334 Jan 24 12:15:43.128000 audit: BPF prog-id=203 op=LOAD Jan 24 12:15:43.128000 audit[4399]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4330 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764623131363835373338656438303339346563656534396262316334 Jan 24 12:15:43.128000 audit: BPF prog-id=203 op=UNLOAD Jan 24 12:15:43.128000 audit[4399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4330 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764623131363835373338656438303339346563656534396262316334 Jan 24 12:15:43.128000 
audit: BPF prog-id=202 op=UNLOAD Jan 24 12:15:43.128000 audit[4399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4330 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764623131363835373338656438303339346563656534396262316334 Jan 24 12:15:43.128000 audit: BPF prog-id=204 op=LOAD Jan 24 12:15:43.128000 audit[4399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4330 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764623131363835373338656438303339346563656534396262316334 Jan 24 12:15:43.170764 containerd[1599]: time="2026-01-24T12:15:43.170463927Z" level=info msg="StartContainer for \"7db11685738ed80394ecee49bb1c41d419189021e932dc080e9b5357d1acc35f\" returns successfully" Jan 24 12:15:43.321208 containerd[1599]: time="2026-01-24T12:15:43.321055871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479996d5-hmtzg,Uid:1e987738-bc46-4583-b000-c8f1bfbb02a7,Namespace:calico-apiserver,Attempt:0,}" Jan 24 12:15:43.543977 systemd-networkd[1516]: cali8f3ec2fe7da: Link UP Jan 24 12:15:43.544791 systemd-networkd[1516]: cali8f3ec2fe7da: Gained carrier Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.376 [INFO][4433] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.401 [INFO][4433] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8479996d5--hmtzg-eth0 calico-apiserver-8479996d5- calico-apiserver 1e987738-bc46-4583-b000-c8f1bfbb02a7 854 0 2026-01-24 12:15:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8479996d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8479996d5-hmtzg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8f3ec2fe7da [] [] }} ContainerID="5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-hmtzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--hmtzg-" Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.401 [INFO][4433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-hmtzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--hmtzg-eth0" Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.447 [INFO][4447] ipam/ipam_plugin.go 227: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" HandleID="k8s-pod-network.5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" Workload="localhost-k8s-calico--apiserver--8479996d5--hmtzg-eth0" Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.447 [INFO][4447] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" HandleID="k8s-pod-network.5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" Workload="localhost-k8s-calico--apiserver--8479996d5--hmtzg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f3f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8479996d5-hmtzg", "timestamp":"2026-01-24 12:15:43.44747018 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.448 [INFO][4447] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.448 [INFO][4447] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.448 [INFO][4447] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.459 [INFO][4447] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" host="localhost" Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.467 [INFO][4447] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.475 [INFO][4447] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.479 [INFO][4447] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.482 [INFO][4447] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.482 [INFO][4447] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" host="localhost" Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.485 [INFO][4447] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9 Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.524 [INFO][4447] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" host="localhost" Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.536 [INFO][4447] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" host="localhost" Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.536 [INFO][4447] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" host="localhost" Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.536 [INFO][4447] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 12:15:43.566638 containerd[1599]: 2026-01-24 12:15:43.536 [INFO][4447] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" HandleID="k8s-pod-network.5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" Workload="localhost-k8s-calico--apiserver--8479996d5--hmtzg-eth0" Jan 24 12:15:43.567316 containerd[1599]: 2026-01-24 12:15:43.540 [INFO][4433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-hmtzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--hmtzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8479996d5--hmtzg-eth0", GenerateName:"calico-apiserver-8479996d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e987738-bc46-4583-b000-c8f1bfbb02a7", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8479996d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8479996d5-hmtzg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f3ec2fe7da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:43.567316 containerd[1599]: 2026-01-24 12:15:43.540 [INFO][4433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-hmtzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--hmtzg-eth0" Jan 24 12:15:43.567316 containerd[1599]: 2026-01-24 12:15:43.541 [INFO][4433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f3ec2fe7da ContainerID="5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-hmtzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--hmtzg-eth0" Jan 24 12:15:43.567316 containerd[1599]: 2026-01-24 12:15:43.544 [INFO][4433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-hmtzg" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--hmtzg-eth0" Jan 24 12:15:43.567316 containerd[1599]: 2026-01-24 12:15:43.544 [INFO][4433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-hmtzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--hmtzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8479996d5--hmtzg-eth0", GenerateName:"calico-apiserver-8479996d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e987738-bc46-4583-b000-c8f1bfbb02a7", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8479996d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9", Pod:"calico-apiserver-8479996d5-hmtzg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f3ec2fe7da", MAC:"86:d8:8c:6e:66:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:43.567316 containerd[1599]: 2026-01-24 12:15:43.562 [INFO][4433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-hmtzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--hmtzg-eth0" Jan 24 12:15:43.572419 systemd-networkd[1516]: cali549c391cb6d: Gained IPv6LL Jan 24 12:15:43.598780 kubelet[2795]: E0124 12:15:43.598715 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:43.613786 kubelet[2795]: E0124 12:15:43.613735 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:43.615847 kubelet[2795]: E0124 12:15:43.615794 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfhsw" podUID="d101ff7c-9560-44ae-a339-4a5dc1053aeb" Jan 24 12:15:43.621437 kubelet[2795]: E0124 12:15:43.621248 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c54756445-jqv9f" podUID="e91bb632-3654-4ec6-9f05-a5289f173ff2" Jan 24 12:15:43.624255 containerd[1599]: time="2026-01-24T12:15:43.623903339Z" level=info msg="connecting to shim 5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9" address="unix:///run/containerd/s/a9e398b66bee37ca4cda023205566626f15c94db15b7754ad59492cd87868365" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:15:43.671483 kubelet[2795]: I0124 12:15:43.671310 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5rqq9" podStartSLOduration=35.671295473 podStartE2EDuration="35.671295473s" podCreationTimestamp="2026-01-24 12:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 12:15:43.628923714 +0000 UTC m=+40.493366026" watchObservedRunningTime="2026-01-24 12:15:43.671295473 +0000 UTC m=+40.535737785" Jan 24 12:15:43.725213 systemd[1]: Started cri-containerd-5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9.scope - libcontainer container 5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9. 
Jan 24 12:15:43.753000 audit: BPF prog-id=205 op=LOAD Jan 24 12:15:43.754000 audit: BPF prog-id=206 op=LOAD Jan 24 12:15:43.754000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4472 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563346438343764303664393264326633393461393862363132643166 Jan 24 12:15:43.754000 audit: BPF prog-id=206 op=UNLOAD Jan 24 12:15:43.754000 audit[4488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4472 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563346438343764303664393264326633393461393862363132643166 Jan 24 12:15:43.754000 audit: BPF prog-id=207 op=LOAD Jan 24 12:15:43.754000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4472 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563346438343764303664393264326633393461393862363132643166 Jan 24 12:15:43.754000 audit: BPF prog-id=208 op=LOAD Jan 24 12:15:43.754000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4472 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563346438343764303664393264326633393461393862363132643166 Jan 24 12:15:43.754000 audit: BPF prog-id=208 op=UNLOAD Jan 24 12:15:43.754000 audit[4488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4472 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563346438343764303664393264326633393461393862363132643166 Jan 24 12:15:43.754000 audit: BPF prog-id=207 op=UNLOAD Jan 24 12:15:43.754000 audit[4488]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4472 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563346438343764303664393264326633393461393862363132643166 Jan 24 12:15:43.754000 audit: BPF prog-id=209 op=LOAD Jan 24 12:15:43.754000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4472 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563346438343764303664393264326633393461393862363132643166 Jan 24 12:15:43.757341 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:15:43.768000 audit[4508]: NETFILTER_CFG table=filter:121 family=2 entries=16 op=nft_register_rule pid=4508 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:43.768000 audit[4508]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe3642ee80 a2=0 a3=7ffe3642ee6c items=0 ppid=2961 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:43.786000 audit[4508]: NETFILTER_CFG table=nat:122 family=2 entries=54 op=nft_register_chain pid=4508 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:43.786000 audit[4508]: SYSCALL arch=c000003e syscall=46 success=yes exit=19092 a0=3 a1=7ffe3642ee80 a2=0 a3=7ffe3642ee6c items=0 ppid=2961 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:43.786000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:43.819053 containerd[1599]: time="2026-01-24T12:15:43.818970713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479996d5-hmtzg,Uid:1e987738-bc46-4583-b000-c8f1bfbb02a7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5c4d847d06d92d2f394a98b612d1fb0ff414a4081200d1c7866e757d32ac2cd9\"" Jan 24 12:15:43.821910 containerd[1599]: time="2026-01-24T12:15:43.821833757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 12:15:43.880073 containerd[1599]: time="2026-01-24T12:15:43.879858042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:43.881806 containerd[1599]: time="2026-01-24T12:15:43.881747759Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 12:15:43.881806 containerd[1599]: time="2026-01-24T12:15:43.881799365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:43.882156 kubelet[2795]: E0124 12:15:43.881939 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:15:43.882156 kubelet[2795]: E0124 12:15:43.881971 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:15:43.886797 kubelet[2795]: E0124 12:15:43.886649 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9sfpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8479996d5-hmtzg_calico-apiserver(1e987738-bc46-4583-b000-c8f1bfbb02a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 12:15:43.888690 kubelet[2795]: E0124 12:15:43.888520 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8479996d5-hmtzg" podUID="1e987738-bc46-4583-b000-c8f1bfbb02a7" Jan 24 12:15:44.137345 kubelet[2795]: I0124 12:15:44.136894 2795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 12:15:44.139170 kubelet[2795]: E0124 12:15:44.138697 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:44.276603 systemd-networkd[1516]: cali12899d0b881: Gained IPv6LL Jan 24 12:15:44.320700 containerd[1599]: time="2026-01-24T12:15:44.320343971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479996d5-zttk4,Uid:f06e4a34-950d-4dc0-91e1-512b91c976bf,Namespace:calico-apiserver,Attempt:0,}" Jan 24 12:15:44.322266 containerd[1599]: time="2026-01-24T12:15:44.321063734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bb998d4d-94tq7,Uid:e2057d14-bb68-40a3-9abf-385c845f08ca,Namespace:calico-system,Attempt:0,}" Jan 24 12:15:44.551062 systemd-networkd[1516]: cali781417e0da8: Link UP Jan 24 12:15:44.552919 systemd-networkd[1516]: cali781417e0da8: Gained carrier Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.382 [INFO][4561] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.403 [INFO][4561] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--79bb998d4d--94tq7-eth0 calico-kube-controllers-79bb998d4d- calico-system e2057d14-bb68-40a3-9abf-385c845f08ca 861 0 2026-01-24 12:15:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79bb998d4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-79bb998d4d-94tq7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali781417e0da8 [] [] }} ContainerID="e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" Namespace="calico-system" Pod="calico-kube-controllers-79bb998d4d-94tq7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bb998d4d--94tq7-" Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.403 [INFO][4561] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" Namespace="calico-system" Pod="calico-kube-controllers-79bb998d4d-94tq7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bb998d4d--94tq7-eth0" Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.452 [INFO][4585] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" 
HandleID="k8s-pod-network.e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" Workload="localhost-k8s-calico--kube--controllers--79bb998d4d--94tq7-eth0" Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.453 [INFO][4585] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" HandleID="k8s-pod-network.e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" Workload="localhost-k8s-calico--kube--controllers--79bb998d4d--94tq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7310), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-79bb998d4d-94tq7", "timestamp":"2026-01-24 12:15:44.452987554 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.453 [INFO][4585] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.453 [INFO][4585] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.453 [INFO][4585] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.462 [INFO][4585] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" host="localhost" Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.473 [INFO][4585] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.482 [INFO][4585] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.519 [INFO][4585] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.523 [INFO][4585] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.523 [INFO][4585] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" host="localhost" Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.526 [INFO][4585] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739 Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.533 [INFO][4585] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" host="localhost" Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.544 [INFO][4585] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" host="localhost" Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.544 [INFO][4585] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" host="localhost" Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.544 [INFO][4585] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 12:15:44.576927 containerd[1599]: 2026-01-24 12:15:44.544 [INFO][4585] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" HandleID="k8s-pod-network.e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" Workload="localhost-k8s-calico--kube--controllers--79bb998d4d--94tq7-eth0" Jan 24 12:15:44.577780 containerd[1599]: 2026-01-24 12:15:44.547 [INFO][4561] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" Namespace="calico-system" Pod="calico-kube-controllers-79bb998d4d-94tq7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bb998d4d--94tq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79bb998d4d--94tq7-eth0", GenerateName:"calico-kube-controllers-79bb998d4d-", Namespace:"calico-system", SelfLink:"", UID:"e2057d14-bb68-40a3-9abf-385c845f08ca", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79bb998d4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-79bb998d4d-94tq7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali781417e0da8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:44.577780 containerd[1599]: 2026-01-24 12:15:44.547 [INFO][4561] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" Namespace="calico-system" Pod="calico-kube-controllers-79bb998d4d-94tq7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bb998d4d--94tq7-eth0" Jan 24 12:15:44.577780 containerd[1599]: 2026-01-24 12:15:44.547 [INFO][4561] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali781417e0da8 ContainerID="e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" Namespace="calico-system" Pod="calico-kube-controllers-79bb998d4d-94tq7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bb998d4d--94tq7-eth0" Jan 24 12:15:44.577780 containerd[1599]: 2026-01-24 12:15:44.553 [INFO][4561] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" Namespace="calico-system" 
Pod="calico-kube-controllers-79bb998d4d-94tq7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bb998d4d--94tq7-eth0" Jan 24 12:15:44.577780 containerd[1599]: 2026-01-24 12:15:44.554 [INFO][4561] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" Namespace="calico-system" Pod="calico-kube-controllers-79bb998d4d-94tq7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bb998d4d--94tq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79bb998d4d--94tq7-eth0", GenerateName:"calico-kube-controllers-79bb998d4d-", Namespace:"calico-system", SelfLink:"", UID:"e2057d14-bb68-40a3-9abf-385c845f08ca", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79bb998d4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739", Pod:"calico-kube-controllers-79bb998d4d-94tq7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali781417e0da8", MAC:"92:0a:eb:c8:fb:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:44.577780 containerd[1599]: 2026-01-24 12:15:44.574 [INFO][4561] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" Namespace="calico-system" Pod="calico-kube-controllers-79bb998d4d-94tq7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bb998d4d--94tq7-eth0" Jan 24 12:15:44.622029 containerd[1599]: time="2026-01-24T12:15:44.621964313Z" level=info msg="connecting to shim e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739" address="unix:///run/containerd/s/ce891dbdf2fe33d42c250cf01f6ebbf3f1afc610ec3a08d04964c43c36d2507a" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:15:44.623884 kubelet[2795]: E0124 12:15:44.623742 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:44.624302 kubelet[2795]: E0124 12:15:44.623898 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:44.626859 kubelet[2795]: E0124 12:15:44.626689 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8479996d5-hmtzg" podUID="1e987738-bc46-4583-b000-c8f1bfbb02a7" Jan 24 12:15:44.628287 kubelet[2795]: E0124 12:15:44.627000 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfhsw" podUID="d101ff7c-9560-44ae-a339-4a5dc1053aeb" Jan 24 12:15:44.659925 systemd-networkd[1516]: cali181735ed54d: Link UP Jan 24 12:15:44.662337 systemd-networkd[1516]: cali181735ed54d: Gained carrier Jan 24 12:15:44.682334 systemd[1]: Started cri-containerd-e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739.scope - libcontainer container e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739. Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.391 [INFO][4569] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.409 [INFO][4569] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8479996d5--zttk4-eth0 calico-apiserver-8479996d5- calico-apiserver f06e4a34-950d-4dc0-91e1-512b91c976bf 864 0 2026-01-24 12:15:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8479996d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8479996d5-zttk4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali181735ed54d [] [] }} ContainerID="0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-zttk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--zttk4-" Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.409 [INFO][4569] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-zttk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--zttk4-eth0" Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.469 [INFO][4592] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" HandleID="k8s-pod-network.0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" 
Workload="localhost-k8s-calico--apiserver--8479996d5--zttk4-eth0" Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.469 [INFO][4592] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" HandleID="k8s-pod-network.0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" Workload="localhost-k8s-calico--apiserver--8479996d5--zttk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8479996d5-zttk4", "timestamp":"2026-01-24 12:15:44.469006174 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.469 [INFO][4592] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.545 [INFO][4592] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.545 [INFO][4592] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.562 [INFO][4592] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" host="localhost" Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.578 [INFO][4592] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.589 [INFO][4592] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.593 [INFO][4592] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.597 [INFO][4592] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.597 [INFO][4592] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" host="localhost" Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.601 [INFO][4592] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0 Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.614 [INFO][4592] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" host="localhost" Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.632 [INFO][4592] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" host="localhost" Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.632 [INFO][4592] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" host="localhost" Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.632 
[INFO][4592] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 12:15:44.707488 containerd[1599]: 2026-01-24 12:15:44.632 [INFO][4592] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" HandleID="k8s-pod-network.0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" Workload="localhost-k8s-calico--apiserver--8479996d5--zttk4-eth0" Jan 24 12:15:44.709195 containerd[1599]: 2026-01-24 12:15:44.644 [INFO][4569] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-zttk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--zttk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8479996d5--zttk4-eth0", GenerateName:"calico-apiserver-8479996d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"f06e4a34-950d-4dc0-91e1-512b91c976bf", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8479996d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8479996d5-zttk4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali181735ed54d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:44.709195 containerd[1599]: 2026-01-24 12:15:44.644 [INFO][4569] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-zttk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--zttk4-eth0" Jan 24 12:15:44.709195 containerd[1599]: 2026-01-24 12:15:44.644 [INFO][4569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali181735ed54d ContainerID="0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-zttk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--zttk4-eth0" Jan 24 12:15:44.709195 containerd[1599]: 2026-01-24 12:15:44.663 [INFO][4569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-zttk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--zttk4-eth0" Jan 24 12:15:44.709195 containerd[1599]: 2026-01-24 12:15:44.663 [INFO][4569] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-zttk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--zttk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8479996d5--zttk4-eth0", GenerateName:"calico-apiserver-8479996d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"f06e4a34-950d-4dc0-91e1-512b91c976bf", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8479996d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0", Pod:"calico-apiserver-8479996d5-zttk4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali181735ed54d", MAC:"62:ca:54:cc:b9:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:44.709195 containerd[1599]: 2026-01-24 12:15:44.702 [INFO][4569] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" Namespace="calico-apiserver" Pod="calico-apiserver-8479996d5-zttk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--8479996d5--zttk4-eth0" Jan 24 12:15:44.711000 audit: BPF prog-id=210 op=LOAD Jan 24 12:15:44.712000 audit: BPF prog-id=211 op=LOAD Jan 24 12:15:44.712000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4621 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383136643265313163326134353561653964393165613466643232 Jan 24 12:15:44.712000 audit: BPF prog-id=211 op=UNLOAD Jan 24 12:15:44.712000 audit[4633]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4621 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.712000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383136643265313163326134353561653964393165613466643232 Jan 24 12:15:44.712000 audit: BPF prog-id=212 op=LOAD Jan 24 12:15:44.712000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4621 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383136643265313163326134353561653964393165613466643232 Jan 24 12:15:44.712000 audit: BPF prog-id=213 op=LOAD Jan 24 12:15:44.712000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4621 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383136643265313163326134353561653964393165613466643232 Jan 24 12:15:44.712000 audit: BPF prog-id=213 op=UNLOAD Jan 24 12:15:44.712000 audit[4633]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4621 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383136643265313163326134353561653964393165613466643232 Jan 24 12:15:44.712000 audit: BPF prog-id=212 op=UNLOAD Jan 24 12:15:44.712000 audit[4633]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4621 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383136643265313163326134353561653964393165613466643232 Jan 24 12:15:44.712000 audit: BPF prog-id=214 op=LOAD Jan 24 12:15:44.712000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4621 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.712000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383136643265313163326134353561653964393165613466643232 Jan 24 12:15:44.715354 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:15:44.728000 audit[4661]: NETFILTER_CFG table=filter:123 family=2 entries=15 op=nft_register_rule pid=4661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:44.728000 audit[4661]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca6ddc320 a2=0 a3=7ffca6ddc30c items=0 ppid=2961 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.728000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:44.736000 audit[4661]: NETFILTER_CFG table=nat:124 family=2 entries=25 op=nft_register_chain pid=4661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:44.736000 audit[4661]: SYSCALL arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7ffca6ddc320 a2=0 a3=7ffca6ddc30c items=0 ppid=2961 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.736000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:44.765899 containerd[1599]: time="2026-01-24T12:15:44.765732310Z" level=info msg="connecting to shim 0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0" address="unix:///run/containerd/s/697347e6616f41d1c3306664b4dd6a49d3dbd71aeffc904fa4898724e644b81b" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:15:44.770892 containerd[1599]: time="2026-01-24T12:15:44.770782146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bb998d4d-94tq7,Uid:e2057d14-bb68-40a3-9abf-385c845f08ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6816d2e11c2a455ae9d91ea4fd22fcfe73942f04be199bc2b50bb36f0435739\"" Jan 24 12:15:44.775303 containerd[1599]: time="2026-01-24T12:15:44.775277086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 12:15:44.825415 systemd[1]: Started cri-containerd-0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0.scope - libcontainer container 0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0. 
Jan 24 12:15:44.840379 containerd[1599]: time="2026-01-24T12:15:44.840353276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:44.842428 containerd[1599]: time="2026-01-24T12:15:44.842402030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 12:15:44.842872 containerd[1599]: time="2026-01-24T12:15:44.842850593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:44.843222 kubelet[2795]: E0124 12:15:44.843034 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 12:15:44.843222 kubelet[2795]: E0124 12:15:44.843204 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 12:15:44.843841 kubelet[2795]: E0124 12:15:44.843734 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pg6sz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79bb998d4d-94tq7_calico-system(e2057d14-bb68-40a3-9abf-385c845f08ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 12:15:44.843000 audit: BPF prog-id=215 op=LOAD Jan 24 12:15:44.848149 kernel: kauditd_printk_skb: 205 callbacks suppressed Jan 24 12:15:44.848206 kernel: audit: type=1334 audit(1769256944.843:636): prog-id=215 op=LOAD Jan 24 12:15:44.849314 kubelet[2795]: E0124 12:15:44.849284 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79bb998d4d-94tq7" podUID="e2057d14-bb68-40a3-9abf-385c845f08ca" Jan 24 12:15:44.845000 audit: BPF prog-id=216 op=LOAD Jan 24 12:15:44.852448 systemd-networkd[1516]: cali8f3ec2fe7da: Gained IPv6LL Jan 24 12:15:44.856576 kernel: audit: type=1334 audit(1769256944.845:637): prog-id=216 op=LOAD Jan 24 12:15:44.856857 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:15:44.845000 audit[4687]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4676 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.871999 kernel: audit: type=1300 audit(1769256944.845:637): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4676 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.872051 kernel: audit: type=1327 audit(1769256944.845:637): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061393763313637316239306334356136303237306365373933613133 Jan 24 12:15:44.845000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061393763313637316239306334356136303237306365373933613133 Jan 24 12:15:44.845000 audit: BPF prog-id=216 op=UNLOAD Jan 24 12:15:44.890650 kernel: audit: type=1334 audit(1769256944.845:638): prog-id=216 op=UNLOAD Jan 24 12:15:44.845000 audit[4687]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4676 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.905167 kernel: audit: type=1300 audit(1769256944.845:638): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4676 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061393763313637316239306334356136303237306365373933613133 Jan 24 12:15:44.919637 kernel: audit: type=1327 audit(1769256944.845:638): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061393763313637316239306334356136303237306365373933613133 Jan 24 12:15:44.845000 audit: BPF prog-id=217 op=LOAD Jan 24 12:15:44.923581 kernel: audit: type=1334 audit(1769256944.845:639): prog-id=217 op=LOAD Jan 24 12:15:44.923753 kernel: audit: type=1300 audit(1769256944.845:639): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4676 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.845000 audit[4687]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4676 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.929651 containerd[1599]: time="2026-01-24T12:15:44.929470409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479996d5-zttk4,Uid:f06e4a34-950d-4dc0-91e1-512b91c976bf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0a97c1671b90c45a60270ce793a13879fc9613abc19ed1765378dff4173fc3e0\"" Jan 24 12:15:44.933738 containerd[1599]: time="2026-01-24T12:15:44.933593906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 12:15:44.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061393763313637316239306334356136303237306365373933613133 Jan 24 12:15:44.958181 kernel: audit: type=1327 audit(1769256944.845:639): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061393763313637316239306334356136303237306365373933613133 Jan 24 12:15:44.845000 audit: BPF prog-id=218 op=LOAD Jan 24 12:15:44.845000 audit[4687]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4676 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061393763313637316239306334356136303237306365373933613133 Jan 24 12:15:44.845000 audit: BPF prog-id=218 op=UNLOAD Jan 24 12:15:44.845000 audit[4687]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4676 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061393763313637316239306334356136303237306365373933613133 Jan 24 12:15:44.845000 audit: BPF prog-id=217 op=UNLOAD Jan 24 12:15:44.845000 audit[4687]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4676 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061393763313637316239306334356136303237306365373933613133 Jan 24 12:15:44.845000 audit: BPF prog-id=219 op=LOAD Jan 24 12:15:44.845000 audit[4687]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4676 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:44.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061393763313637316239306334356136303237306365373933613133 Jan 24 12:15:44.998474 containerd[1599]: time="2026-01-24T12:15:44.997596091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:45.000205 containerd[1599]: time="2026-01-24T12:15:45.000179503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 12:15:45.000915 containerd[1599]: 
time="2026-01-24T12:15:45.000454827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:45.001796 kubelet[2795]: E0124 12:15:45.001768 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:15:45.002025 kubelet[2795]: E0124 12:15:45.002005 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:15:45.003855 kubelet[2795]: E0124 12:15:45.003674 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8s4r4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8479996d5-zttk4_calico-apiserver(f06e4a34-950d-4dc0-91e1-512b91c976bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 12:15:45.005069 kubelet[2795]: E0124 12:15:45.005011 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8479996d5-zttk4" podUID="f06e4a34-950d-4dc0-91e1-512b91c976bf" Jan 24 12:15:45.188000 audit: BPF prog-id=220 op=LOAD Jan 24 12:15:45.188000 audit[4738]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff5acde7e0 a2=98 a3=1fffffffffffffff items=0 ppid=4713 pid=4738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.188000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 12:15:45.188000 audit: BPF prog-id=220 op=UNLOAD Jan 24 12:15:45.188000 audit[4738]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff5acde7b0 a3=0 items=0 ppid=4713 pid=4738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.188000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 12:15:45.189000 audit: BPF prog-id=221 op=LOAD Jan 24 12:15:45.189000 audit[4738]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff5acde6c0 a2=94 a3=3 items=0 ppid=4713 pid=4738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.189000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 12:15:45.190000 audit: BPF prog-id=221 op=UNLOAD Jan 24 12:15:45.190000 audit[4738]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff5acde6c0 a2=94 a3=3 items=0 ppid=4713 pid=4738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.190000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 12:15:45.190000 audit: BPF prog-id=222 op=LOAD Jan 24 12:15:45.190000 audit[4738]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff5acde700 a2=94 a3=7fff5acde8e0 items=0 ppid=4713 pid=4738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.190000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 12:15:45.190000 audit: BPF prog-id=222 op=UNLOAD Jan 24 12:15:45.190000 audit[4738]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff5acde700 a2=94 a3=7fff5acde8e0 items=0 ppid=4713 pid=4738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.190000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 12:15:45.194000 audit: BPF prog-id=223 op=LOAD Jan 24 12:15:45.194000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff2adb0440 a2=98 a3=3 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.194000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.195000 audit: BPF prog-id=223 op=UNLOAD Jan 24 12:15:45.195000 audit[4739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff2adb0410 a3=0 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.195000 audit: BPF prog-id=224 op=LOAD Jan 24 12:15:45.195000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff2adb0230 a2=94 a3=54428f items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.195000 audit: BPF prog-id=224 op=UNLOAD Jan 24 12:15:45.195000 audit[4739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff2adb0230 a2=94 a3=54428f items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.195000 audit: BPF prog-id=225 op=LOAD Jan 24 12:15:45.195000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff2adb0260 a2=94 a3=2 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.195000 audit: BPF prog-id=225 op=UNLOAD Jan 24 12:15:45.195000 audit[4739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff2adb0260 a2=0 a3=2 
items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.322030 containerd[1599]: time="2026-01-24T12:15:45.321697596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xv8ss,Uid:51f294f0-54db-45aa-b128-8a4414560ade,Namespace:calico-system,Attempt:0,}" Jan 24 12:15:45.416000 audit: BPF prog-id=226 op=LOAD Jan 24 12:15:45.416000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff2adb0120 a2=94 a3=1 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.416000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.417000 audit: BPF prog-id=226 op=UNLOAD Jan 24 12:15:45.417000 audit[4739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff2adb0120 a2=94 a3=1 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.417000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.437000 audit: BPF prog-id=227 op=LOAD Jan 24 12:15:45.437000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff2adb0110 a2=94 a3=4 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.437000 audit: BPF prog-id=227 op=UNLOAD Jan 24 12:15:45.437000 audit[4739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff2adb0110 a2=0 a3=4 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.437000 audit: BPF prog-id=228 op=LOAD Jan 24 12:15:45.437000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff2adaff70 a2=94 a3=5 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.438000 audit: BPF prog-id=228 op=UNLOAD Jan 24 12:15:45.438000 audit[4739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff2adaff70 a2=0 a3=5 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.438000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.438000 audit: BPF prog-id=229 op=LOAD Jan 24 12:15:45.438000 audit[4739]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=5 a0=5 a1=7fff2adb0190 a2=94 a3=6 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.438000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.438000 audit: BPF prog-id=229 op=UNLOAD Jan 24 12:15:45.438000 audit[4739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff2adb0190 a2=0 a3=6 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.438000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.438000 audit: BPF prog-id=230 op=LOAD Jan 24 12:15:45.438000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff2adaf940 a2=94 a3=88 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.438000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.438000 audit: BPF prog-id=231 op=LOAD Jan 24 12:15:45.438000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff2adaf7c0 a2=94 a3=2 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.438000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.438000 audit: BPF prog-id=231 op=UNLOAD Jan 24 12:15:45.438000 audit[4739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff2adaf7f0 a2=0 a3=7fff2adaf8f0 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.438000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.439000 audit: BPF prog-id=230 op=UNLOAD Jan 24 12:15:45.439000 audit[4739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1f670d10 a2=0 a3=5d52ff520a3ba457 items=0 ppid=4713 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.439000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:15:45.481000 audit: BPF prog-id=232 op=LOAD Jan 24 12:15:45.481000 audit[4762]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcf8e26950 a2=98 a3=1999999999999999 items=0 ppid=4713 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.481000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 12:15:45.481000 audit: BPF prog-id=232 
op=UNLOAD Jan 24 12:15:45.481000 audit[4762]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcf8e26920 a3=0 items=0 ppid=4713 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.481000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 12:15:45.481000 audit: BPF prog-id=233 op=LOAD Jan 24 12:15:45.481000 audit[4762]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcf8e26830 a2=94 a3=ffff items=0 ppid=4713 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.481000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 12:15:45.481000 audit: BPF prog-id=233 op=UNLOAD Jan 24 12:15:45.481000 audit[4762]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcf8e26830 a2=94 a3=ffff items=0 ppid=4713 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.481000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 12:15:45.481000 audit: BPF prog-id=234 op=LOAD Jan 24 12:15:45.481000 audit[4762]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcf8e26870 a2=94 a3=7ffcf8e26a50 items=0 ppid=4713 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.481000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 12:15:45.481000 audit: BPF prog-id=234 op=UNLOAD Jan 24 12:15:45.481000 audit[4762]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcf8e26870 a2=94 a3=7ffcf8e26a50 items=0 ppid=4713 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.481000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 12:15:45.635030 kubelet[2795]: E0124 12:15:45.634282 2795 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79bb998d4d-94tq7" podUID="e2057d14-bb68-40a3-9abf-385c845f08ca" Jan 24 12:15:45.642951 systemd-networkd[1516]: cali934429c1027: Link UP Jan 24 12:15:45.646367 systemd-networkd[1516]: cali934429c1027: Gained carrier Jan 24 12:15:45.649297 kubelet[2795]: E0124 12:15:45.648818 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8479996d5-hmtzg" podUID="1e987738-bc46-4583-b000-c8f1bfbb02a7" Jan 24 12:15:45.649297 kubelet[2795]: E0124 12:15:45.648878 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8479996d5-zttk4" podUID="f06e4a34-950d-4dc0-91e1-512b91c976bf" Jan 24 12:15:45.649764 kubelet[2795]: E0124 12:15:45.649643 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:45.681697 kubelet[2795]: I0124 12:15:45.681668 2795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 12:15:45.682652 kubelet[2795]: E0124 12:15:45.682635 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.434 [INFO][4741] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--xv8ss-eth0 goldmane-666569f655- calico-system 51f294f0-54db-45aa-b128-8a4414560ade 862 0 2026-01-24 12:15:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-xv8ss eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali934429c1027 [] [] }} ContainerID="24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" Namespace="calico-system" Pod="goldmane-666569f655-xv8ss" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xv8ss-" Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.436 [INFO][4741] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" 
Namespace="calico-system" Pod="goldmane-666569f655-xv8ss" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xv8ss-eth0" Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.534 [INFO][4756] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" HandleID="k8s-pod-network.24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" Workload="localhost-k8s-goldmane--666569f655--xv8ss-eth0" Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.535 [INFO][4756] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" HandleID="k8s-pod-network.24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" Workload="localhost-k8s-goldmane--666569f655--xv8ss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000346e90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-xv8ss", "timestamp":"2026-01-24 12:15:45.534967606 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.535 [INFO][4756] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.535 [INFO][4756] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.535 [INFO][4756] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.557 [INFO][4756] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" host="localhost" Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.567 [INFO][4756] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.578 [INFO][4756] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.585 [INFO][4756] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.595 [INFO][4756] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.595 [INFO][4756] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" host="localhost" Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.600 [INFO][4756] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.612 [INFO][4756] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" host="localhost" Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.624 [INFO][4756] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" host="localhost" Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.624 [INFO][4756] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" host="localhost" Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.624 [INFO][4756] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 12:15:45.697523 containerd[1599]: 2026-01-24 12:15:45.624 [INFO][4756] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" HandleID="k8s-pod-network.24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" Workload="localhost-k8s-goldmane--666569f655--xv8ss-eth0" Jan 24 12:15:45.698314 containerd[1599]: 2026-01-24 12:15:45.631 [INFO][4741] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" Namespace="calico-system" Pod="goldmane-666569f655-xv8ss" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xv8ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--xv8ss-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"51f294f0-54db-45aa-b128-8a4414560ade", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-xv8ss", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali934429c1027", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:45.698314 containerd[1599]: 2026-01-24 12:15:45.633 [INFO][4741] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" Namespace="calico-system" Pod="goldmane-666569f655-xv8ss" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xv8ss-eth0" Jan 24 12:15:45.698314 containerd[1599]: 2026-01-24 12:15:45.633 [INFO][4741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali934429c1027 ContainerID="24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" Namespace="calico-system" Pod="goldmane-666569f655-xv8ss" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xv8ss-eth0" Jan 24 12:15:45.698314 containerd[1599]: 2026-01-24 12:15:45.648 [INFO][4741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" 
Namespace="calico-system" Pod="goldmane-666569f655-xv8ss" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xv8ss-eth0" Jan 24 12:15:45.698314 containerd[1599]: 2026-01-24 12:15:45.651 [INFO][4741] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" Namespace="calico-system" Pod="goldmane-666569f655-xv8ss" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xv8ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--xv8ss-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"51f294f0-54db-45aa-b128-8a4414560ade", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 15, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad", Pod:"goldmane-666569f655-xv8ss", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali934429c1027", MAC:"92:b3:84:4d:e8:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:15:45.698314 containerd[1599]: 2026-01-24 12:15:45.686 [INFO][4741] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" Namespace="calico-system" Pod="goldmane-666569f655-xv8ss" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xv8ss-eth0" Jan 24 12:15:45.734907 systemd-networkd[1516]: vxlan.calico: Link UP Jan 24 12:15:45.734926 systemd-networkd[1516]: vxlan.calico: Gained carrier Jan 24 12:15:45.821000 audit[4792]: NETFILTER_CFG table=filter:125 family=2 entries=14 op=nft_register_rule pid=4792 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:45.821000 audit[4792]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff328121e0 a2=0 a3=7fff328121cc items=0 ppid=2961 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.821000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:45.829000 audit[4792]: NETFILTER_CFG table=nat:126 family=2 entries=20 op=nft_register_rule pid=4792 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:45.829000 audit[4792]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff328121e0 a2=0 a3=7fff328121cc items=0 ppid=2961 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.829000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:45.878000 audit: BPF prog-id=235 op=LOAD Jan 24 12:15:45.878000 audit[4833]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd778e100 a2=98 a3=0 items=0 ppid=4713 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.878000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:15:45.879000 audit: BPF prog-id=235 op=UNLOAD Jan 24 12:15:45.879000 audit[4833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcd778e0d0 a3=0 items=0 ppid=4713 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.879000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:15:45.879000 audit: BPF prog-id=236 op=LOAD Jan 24 12:15:45.879000 audit[4833]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd778df10 a2=94 a3=54428f items=0 ppid=4713 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.879000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:15:45.879000 audit: BPF prog-id=236 op=UNLOAD Jan 24 12:15:45.879000 audit[4833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd778df10 a2=94 a3=54428f items=0 ppid=4713 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.879000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:15:45.879000 audit: BPF prog-id=237 op=LOAD Jan 24 12:15:45.879000 audit[4833]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd778df40 a2=94 a3=2 items=0 ppid=4713 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.879000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:15:45.879000 audit: BPF prog-id=237 
op=UNLOAD Jan 24 12:15:45.879000 audit[4833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd778df40 a2=0 a3=2 items=0 ppid=4713 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.879000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:15:45.879000 audit: BPF prog-id=238 op=LOAD Jan 24 12:15:45.879000 audit[4833]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd778dcf0 a2=94 a3=4 items=0 ppid=4713 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.879000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:15:45.879000 audit: BPF prog-id=238 op=UNLOAD Jan 24 12:15:45.879000 audit[4833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcd778dcf0 a2=94 a3=4 items=0 ppid=4713 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.879000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:15:45.879000 audit: BPF prog-id=239 op=LOAD Jan 24 12:15:45.879000 audit[4833]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd778ddf0 a2=94 a3=7ffcd778df70 items=0 ppid=4713 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.879000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:15:45.879000 audit: BPF prog-id=239 op=UNLOAD Jan 24 12:15:45.879000 audit[4833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcd778ddf0 a2=0 a3=7ffcd778df70 items=0 ppid=4713 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.879000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:15:45.880000 audit: BPF prog-id=240 op=LOAD Jan 24 12:15:45.880000 audit[4833]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd778d520 a2=94 a3=2 items=0 ppid=4713 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.880000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:15:45.880000 audit: BPF prog-id=240 op=UNLOAD Jan 24 12:15:45.880000 audit[4833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcd778d520 a2=0 a3=2 items=0 ppid=4713 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.880000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:15:45.880000 audit: BPF prog-id=241 op=LOAD Jan 24 12:15:45.880000 audit[4833]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd778d620 a2=94 a3=30 items=0 ppid=4713 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.880000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:15:45.898891 containerd[1599]: time="2026-01-24T12:15:45.871734360Z" level=info msg="connecting to shim 24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad" address="unix:///run/containerd/s/e702c6f7282cfa1649d96094b14748e8208f2daf19e4201b84434220f8894b71" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:15:45.914000 audit: BPF prog-id=242 op=LOAD Jan 24 12:15:45.914000 audit[4851]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff05d84fa0 a2=98 a3=0 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.914000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:45.917000 audit: BPF prog-id=242 op=UNLOAD Jan 24 12:15:45.917000 audit[4851]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff05d84f70 a3=0 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.917000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:45.917000 audit: BPF prog-id=243 op=LOAD Jan 24 12:15:45.917000 audit[4851]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff05d84d90 a2=94 a3=54428f items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.917000 
audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:45.918000 audit: BPF prog-id=243 op=UNLOAD Jan 24 12:15:45.918000 audit[4851]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff05d84d90 a2=94 a3=54428f items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.918000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:45.918000 audit: BPF prog-id=244 op=LOAD Jan 24 12:15:45.918000 audit[4851]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff05d84dc0 a2=94 a3=2 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.918000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:45.918000 audit: BPF prog-id=244 op=UNLOAD Jan 24 12:15:45.918000 audit[4851]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff05d84dc0 a2=0 a3=2 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:45.918000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:45.942885 systemd-networkd[1516]: cali181735ed54d: Gained IPv6LL Jan 24 12:15:45.947388 systemd[1]: Started cri-containerd-24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad.scope - libcontainer container 24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad. 
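Each audit SYSCALL record in this stretch is followed by a PROCTITLE record whose value is the process's argv encoded as NUL-separated hex. The bpftool invocations (children of calico-node, ppid 4713, i.e. Felix) decode to commands such as "bpftool map create /sys/fs/bpf/calico/calico_failsafe_ports_v1 type hash key 4 value 1 entries 65535 name calico_failsafe_ports_..." (truncated; the captured argv is exactly 128 bytes) and "bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp". A minimal decoding sketch in Python follows; the helper name and the embedded sample string are chosen here for illustration only.

    #!/usr/bin/env python3
    # Decode an audit PROCTITLE hex value (NUL-separated argv) into a readable command line.

    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)          # PROCTITLE is the raw argv buffer, hex-encoded
        args = raw.split(b"\x00")               # arguments are separated by NUL bytes
        return " ".join(a.decode("utf-8", "replace") for a in args if a)

    if __name__ == "__main__":
        # Sample copied from one of the bpftool audit records above.
        sample = "627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470"
        print(decode_proctitle(sample))
        # -> bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp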
Jan 24 12:15:46.000000 audit: BPF prog-id=245 op=LOAD Jan 24 12:15:46.001000 audit: BPF prog-id=246 op=LOAD Jan 24 12:15:46.001000 audit[4850]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4812 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234343531643664663731303366396331343834666666303238386430 Jan 24 12:15:46.005000 audit: BPF prog-id=246 op=UNLOAD Jan 24 12:15:46.005000 audit[4850]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4812 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234343531643664663731303366396331343834666666303238386430 Jan 24 12:15:46.006000 audit: BPF prog-id=247 op=LOAD Jan 24 12:15:46.006000 audit[4850]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4812 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.006000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234343531643664663731303366396331343834666666303238386430 Jan 24 12:15:46.007000 audit: BPF prog-id=248 op=LOAD Jan 24 12:15:46.007000 audit[4850]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4812 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234343531643664663731303366396331343834666666303238386430 Jan 24 12:15:46.008000 audit: BPF prog-id=248 op=UNLOAD Jan 24 12:15:46.008000 audit[4850]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4812 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234343531643664663731303366396331343834666666303238386430 Jan 24 12:15:46.008000 audit: BPF prog-id=247 op=UNLOAD Jan 24 12:15:46.008000 audit[4850]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4812 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234343531643664663731303366396331343834666666303238386430 Jan 24 12:15:46.008000 audit: BPF prog-id=249 op=LOAD Jan 24 12:15:46.008000 audit[4850]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4812 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234343531643664663731303366396331343834666666303238386430 Jan 24 12:15:46.015684 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:15:46.112398 containerd[1599]: time="2026-01-24T12:15:46.112364110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xv8ss,Uid:51f294f0-54db-45aa-b128-8a4414560ade,Namespace:calico-system,Attempt:0,} returns sandbox id \"24451d6df7103f9c1484fff0288d04b6f43858486cd8e4a9428cd9f2074b20ad\"" Jan 24 12:15:46.118010 containerd[1599]: time="2026-01-24T12:15:46.117858597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 12:15:46.179801 containerd[1599]: time="2026-01-24T12:15:46.179489150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:46.181222 containerd[1599]: time="2026-01-24T12:15:46.181169687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:46.181222 containerd[1599]: time="2026-01-24T12:15:46.181208410Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 12:15:46.182834 kubelet[2795]: E0124 12:15:46.181427 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 12:15:46.182834 kubelet[2795]: E0124 12:15:46.181461 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 12:15:46.182834 kubelet[2795]: E0124 12:15:46.181623 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdwl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xv8ss_calico-system(51f294f0-54db-45aa-b128-8a4414560ade): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 12:15:46.184025 kubelet[2795]: E0124 12:15:46.183400 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xv8ss" podUID="51f294f0-54db-45aa-b128-8a4414560ade" Jan 24 12:15:46.263000 audit: BPF prog-id=250 op=LOAD Jan 24 12:15:46.263000 audit[4851]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 
a1=7fff05d84c80 a2=94 a3=1 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.263000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:46.267000 audit: BPF prog-id=250 op=UNLOAD Jan 24 12:15:46.267000 audit[4851]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff05d84c80 a2=94 a3=1 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.267000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:46.278000 audit: BPF prog-id=251 op=LOAD Jan 24 12:15:46.278000 audit[4851]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff05d84c70 a2=94 a3=4 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.278000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:46.278000 audit: BPF prog-id=251 op=UNLOAD Jan 24 12:15:46.278000 audit[4851]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff05d84c70 a2=0 a3=4 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.278000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:46.278000 audit: BPF prog-id=252 op=LOAD Jan 24 12:15:46.278000 audit[4851]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff05d84ad0 a2=94 a3=5 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.278000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:46.278000 audit: BPF prog-id=252 op=UNLOAD Jan 24 12:15:46.278000 audit[4851]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff05d84ad0 a2=0 a3=5 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.278000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:46.278000 
audit: BPF prog-id=253 op=LOAD Jan 24 12:15:46.278000 audit[4851]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff05d84cf0 a2=94 a3=6 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.278000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:46.278000 audit: BPF prog-id=253 op=UNLOAD Jan 24 12:15:46.278000 audit[4851]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff05d84cf0 a2=0 a3=6 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.278000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:46.279000 audit: BPF prog-id=254 op=LOAD Jan 24 12:15:46.279000 audit[4851]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff05d844a0 a2=94 a3=88 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.279000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:46.279000 audit: BPF prog-id=255 op=LOAD Jan 24 12:15:46.279000 audit[4851]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff05d84320 a2=94 a3=2 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.279000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:46.279000 audit: BPF prog-id=255 op=UNLOAD Jan 24 12:15:46.279000 audit[4851]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff05d84350 a2=0 a3=7fff05d84450 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.279000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:46.280000 audit: BPF prog-id=254 op=UNLOAD Jan 24 12:15:46.280000 audit[4851]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1c0eed10 a2=0 a3=3cd2613b2e4513a5 items=0 ppid=4713 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.280000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:15:46.293000 audit: BPF prog-id=241 op=UNLOAD Jan 24 12:15:46.293000 audit[4713]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000bf6100 a2=0 a3=0 items=0 ppid=3993 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.293000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 24 12:15:46.463000 audit[4935]: NETFILTER_CFG table=mangle:127 family=2 entries=16 op=nft_register_chain pid=4935 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 12:15:46.463000 audit[4935]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffdfb227310 a2=0 a3=7ffdfb2272fc items=0 ppid=4713 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.463000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 12:15:46.467000 audit[4938]: NETFILTER_CFG table=nat:128 family=2 entries=15 op=nft_register_chain pid=4938 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 12:15:46.467000 audit[4938]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffa5025690 a2=0 a3=7fffa502567c items=0 ppid=4713 pid=4938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.467000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 12:15:46.478000 audit[4934]: NETFILTER_CFG table=raw:129 family=2 entries=21 op=nft_register_chain pid=4934 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 12:15:46.478000 audit[4934]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcfe5cd080 a2=0 a3=7ffcfe5cd06c items=0 ppid=4713 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.478000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 12:15:46.490000 audit[4937]: NETFILTER_CFG table=filter:130 family=2 entries=285 op=nft_register_chain pid=4937 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 12:15:46.490000 audit[4937]: SYSCALL arch=c000003e syscall=46 success=yes exit=168336 a0=3 a1=7ffe5eccda00 a2=0 a3=7ffe5eccd9ec items=0 ppid=4713 pid=4937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.490000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 12:15:46.555000 audit[4948]: NETFILTER_CFG table=filter:131 family=2 entries=64 op=nft_register_chain pid=4948 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 12:15:46.555000 audit[4948]: SYSCALL arch=c000003e syscall=46 success=yes exit=31104 a0=3 a1=7fff3543b950 a2=0 a3=7fff3543b93c items=0 ppid=4713 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.555000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 12:15:46.581425 systemd-networkd[1516]: cali781417e0da8: Gained IPv6LL Jan 24 12:15:46.651429 kubelet[2795]: E0124 12:15:46.651228 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xv8ss" podUID="51f294f0-54db-45aa-b128-8a4414560ade" Jan 24 12:15:46.652652 kubelet[2795]: E0124 12:15:46.651691 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79bb998d4d-94tq7" podUID="e2057d14-bb68-40a3-9abf-385c845f08ca" Jan 24 12:15:46.652764 kubelet[2795]: E0124 12:15:46.652736 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8479996d5-zttk4" podUID="f06e4a34-950d-4dc0-91e1-512b91c976bf" Jan 24 12:15:46.725000 audit[4951]: NETFILTER_CFG table=filter:132 family=2 entries=14 op=nft_register_rule pid=4951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:46.725000 audit[4951]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe7c452aa0 a2=0 a3=7ffe7c452a8c items=0 ppid=2961 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.725000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:46.734000 audit[4951]: NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=4951 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:15:46.734000 audit[4951]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe7c452aa0 a2=0 a3=7ffe7c452a8c items=0 ppid=2961 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:46.734000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:15:47.220486 systemd-networkd[1516]: cali934429c1027: Gained IPv6LL Jan 24 12:15:47.604528 systemd-networkd[1516]: vxlan.calico: Gained IPv6LL Jan 24 12:15:47.653864 kubelet[2795]: E0124 12:15:47.653775 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xv8ss" podUID="51f294f0-54db-45aa-b128-8a4414560ade" Jan 24 12:15:49.707765 systemd[1]: Started sshd@9-10.0.0.151:22-10.0.0.1:35526.service - OpenSSH per-connection server daemon (10.0.0.1:35526). Jan 24 12:15:49.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.151:22-10.0.0.1:35526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:15:49.830000 audit[4956]: USER_ACCT pid=4956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:49.831956 sshd[4956]: Accepted publickey for core from 10.0.0.1 port 35526 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:15:49.831000 audit[4956]: CRED_ACQ pid=4956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:49.831000 audit[4956]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfc533fd0 a2=3 a3=0 items=0 ppid=1 pid=4956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:49.831000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:15:49.834310 sshd-session[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:15:49.841655 systemd-logind[1577]: New session 11 of user core. Jan 24 12:15:49.850376 systemd[1]: Started session-11.scope - Session 11 of User core. 
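The kubelet entries repeating through this section all describe the same condition: the images ghcr.io/flatcar/calico/goldmane:v3.30.4, ghcr.io/flatcar/calico/kube-controllers:v3.30.4 and ghcr.io/flatcar/calico/apiserver:v3.30.4 return 404 Not Found from ghcr.io, so the affected pods sit in ImagePullBackOff and pod_workers.go logs "Error syncing pod, skipping" on every back-off retry. A small sketch for tallying those entries per pod and image from a dump like this one is below; the regular expression is written against the exact message format visible above and is illustrative only.

    #!/usr/bin/env python3
    # Tally kubelet "Error syncing pod, skipping" image-pull failures from a journal dump on stdin.
    import re
    import sys
    from collections import Counter

    # The image name inside err="..." is wrapped in escaped quotes (\\\"...\\\"), and each entry
    # ends with pod="<namespace>/<name>" podUID="<uid>".
    ENTRY = re.compile(
        r'"Error syncing pod, skipping"\s+err=".*?'
        r'image \\+"(?P<image>[^"\\]+)\\+".*?'
        r'pod="(?P<pod>[^"]+)" podUID="',
        re.DOTALL,
    )

    def tally(text: str) -> Counter:
        counts = Counter()
        for m in ENTRY.finditer(text):
            counts[(m.group("pod"), m.group("image"))] += 1
        return counts

    if __name__ == "__main__":
        for (pod, image), n in sorted(tally(sys.stdin.read()).items()):
            print(f"{n:3d}x  {pod}  <-  {image}")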
Jan 24 12:15:49.852000 audit[4956]: USER_START pid=4956 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:49.857658 kernel: kauditd_printk_skb: 253 callbacks suppressed Jan 24 12:15:49.857713 kernel: audit: type=1105 audit(1769256949.852:727): pid=4956 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:49.889823 kernel: audit: type=1103 audit(1769256949.855:728): pid=4961 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:49.855000 audit[4961]: CRED_ACQ pid=4961 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:49.976316 sshd[4961]: Connection closed by 10.0.0.1 port 35526 Jan 24 12:15:49.976633 sshd-session[4956]: pam_unix(sshd:session): session closed for user core Jan 24 12:15:49.977000 audit[4956]: USER_END pid=4956 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:49.982400 systemd[1]: sshd@9-10.0.0.151:22-10.0.0.1:35526.service: Deactivated successfully. Jan 24 12:15:49.985485 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 12:15:49.986816 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. Jan 24 12:15:49.988845 systemd-logind[1577]: Removed session 11. Jan 24 12:15:49.977000 audit[4956]: CRED_DISP pid=4956 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:50.007452 kernel: audit: type=1106 audit(1769256949.977:729): pid=4956 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:50.007543 kernel: audit: type=1104 audit(1769256949.977:730): pid=4956 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:50.007639 kernel: audit: type=1131 audit(1769256949.977:731): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.151:22-10.0.0.1:35526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:15:49.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.151:22-10.0.0.1:35526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:15:54.994876 systemd[1]: Started sshd@10-10.0.0.151:22-10.0.0.1:39930.service - OpenSSH per-connection server daemon (10.0.0.1:39930). Jan 24 12:15:54.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.151:22-10.0.0.1:39930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:15:55.016412 kernel: audit: type=1130 audit(1769256954.993:732): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.151:22-10.0.0.1:39930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:15:55.077000 audit[4988]: USER_ACCT pid=4988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:55.080297 sshd[4988]: Accepted publickey for core from 10.0.0.1 port 39930 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:15:55.082384 sshd-session[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:15:55.089198 systemd-logind[1577]: New session 12 of user core. Jan 24 12:15:55.079000 audit[4988]: CRED_ACQ pid=4988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:55.125216 kernel: audit: type=1101 audit(1769256955.077:733): pid=4988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:55.125269 kernel: audit: type=1103 audit(1769256955.079:734): pid=4988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:55.125285 kernel: audit: type=1006 audit(1769256955.080:735): pid=4988 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 24 12:15:55.080000 audit[4988]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb4309de0 a2=3 a3=0 items=0 ppid=1 pid=4988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:55.154068 kernel: audit: type=1300 audit(1769256955.080:735): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb4309de0 a2=3 a3=0 items=0 ppid=1 pid=4988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:15:55.154188 kernel: audit: type=1327 audit(1769256955.080:735): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:15:55.080000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:15:55.165653 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 12:15:55.168000 audit[4988]: USER_START pid=4988 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:55.168000 audit[4992]: CRED_ACQ pid=4992 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:55.200486 kernel: audit: type=1105 audit(1769256955.168:736): pid=4988 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:55.200524 kernel: audit: type=1103 audit(1769256955.168:737): pid=4992 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:55.292466 sshd[4992]: Connection closed by 10.0.0.1 port 39930 Jan 24 12:15:55.292757 sshd-session[4988]: pam_unix(sshd:session): session closed for user core Jan 24 12:15:55.293000 audit[4988]: USER_END pid=4988 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:55.299333 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. Jan 24 12:15:55.299546 systemd[1]: sshd@10-10.0.0.151:22-10.0.0.1:39930.service: Deactivated successfully. Jan 24 12:15:55.302733 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 12:15:55.305554 systemd-logind[1577]: Removed session 12. 
Jan 24 12:15:55.293000 audit[4988]: CRED_DISP pid=4988 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:55.326070 kernel: audit: type=1106 audit(1769256955.293:738): pid=4988 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:55.326231 kernel: audit: type=1104 audit(1769256955.293:739): pid=4988 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:15:55.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.151:22-10.0.0.1:39930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:15:57.324302 containerd[1599]: time="2026-01-24T12:15:57.324229935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 12:15:57.389224 containerd[1599]: time="2026-01-24T12:15:57.388830179Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:57.390536 containerd[1599]: time="2026-01-24T12:15:57.390433577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 12:15:57.390536 containerd[1599]: time="2026-01-24T12:15:57.390530388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:57.390975 kubelet[2795]: E0124 12:15:57.390844 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 12:15:57.390975 kubelet[2795]: E0124 12:15:57.390927 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 12:15:57.391495 containerd[1599]: time="2026-01-24T12:15:57.391211309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 12:15:57.392568 kubelet[2795]: E0124 12:15:57.392302 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pg6sz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79bb998d4d-94tq7_calico-system(e2057d14-bb68-40a3-9abf-385c845f08ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 12:15:57.393905 kubelet[2795]: E0124 12:15:57.393866 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79bb998d4d-94tq7" podUID="e2057d14-bb68-40a3-9abf-385c845f08ca" Jan 24 12:15:57.464458 containerd[1599]: time="2026-01-24T12:15:57.464394496Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:57.466057 containerd[1599]: 
time="2026-01-24T12:15:57.465990585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 12:15:57.466057 containerd[1599]: time="2026-01-24T12:15:57.466044957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:57.466512 kubelet[2795]: E0124 12:15:57.466419 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 12:15:57.466562 kubelet[2795]: E0124 12:15:57.466513 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 12:15:57.467033 kubelet[2795]: E0124 12:15:57.466821 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brds4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vfhsw_calico-system(d101ff7c-9560-44ae-a339-4a5dc1053aeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 12:15:57.467372 containerd[1599]: time="2026-01-24T12:15:57.466872877Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 12:15:57.531692 containerd[1599]: time="2026-01-24T12:15:57.531487846Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:57.533374 containerd[1599]: time="2026-01-24T12:15:57.533231911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 12:15:57.533374 containerd[1599]: time="2026-01-24T12:15:57.533331447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:57.533565 kubelet[2795]: E0124 12:15:57.533483 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:15:57.533565 kubelet[2795]: E0124 12:15:57.533551 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:15:57.533944 containerd[1599]: time="2026-01-24T12:15:57.533866406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 12:15:57.533992 kubelet[2795]: E0124 12:15:57.533946 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8s4r4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8479996d5-zttk4_calico-apiserver(f06e4a34-950d-4dc0-91e1-512b91c976bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 12:15:57.535353 kubelet[2795]: E0124 12:15:57.535251 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8479996d5-zttk4" podUID="f06e4a34-950d-4dc0-91e1-512b91c976bf" Jan 24 12:15:57.592907 containerd[1599]: time="2026-01-24T12:15:57.592515705Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:57.595018 containerd[1599]: time="2026-01-24T12:15:57.594888932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 12:15:57.595018 containerd[1599]: time="2026-01-24T12:15:57.594932126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:57.595242 kubelet[2795]: E0124 12:15:57.595158 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 12:15:57.595242 kubelet[2795]: E0124 12:15:57.595185 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 12:15:57.595299 kubelet[2795]: E0124 12:15:57.595254 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brds4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vfhsw_calico-system(d101ff7c-9560-44ae-a339-4a5dc1053aeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 12:15:57.596865 kubelet[2795]: E0124 12:15:57.596718 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfhsw" podUID="d101ff7c-9560-44ae-a339-4a5dc1053aeb" Jan 24 12:15:58.322547 containerd[1599]: time="2026-01-24T12:15:58.322441244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 12:15:58.385821 containerd[1599]: time="2026-01-24T12:15:58.385537296Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:58.388201 containerd[1599]: time="2026-01-24T12:15:58.387964027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 12:15:58.388201 containerd[1599]: time="2026-01-24T12:15:58.388065256Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:58.388505 kubelet[2795]: E0124 12:15:58.388423 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 12:15:58.388713 kubelet[2795]: E0124 12:15:58.388693 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 12:15:58.389226 kubelet[2795]: E0124 12:15:58.388954 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:470b03a2107d413e8e61da949363329c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gr58p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c54756445-jqv9f_calico-system(e91bb632-3654-4ec6-9f05-a5289f173ff2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 12:15:58.392874 containerd[1599]: time="2026-01-24T12:15:58.392728019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 12:15:58.460958 containerd[1599]: time="2026-01-24T12:15:58.460684465Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:58.463516 containerd[1599]: time="2026-01-24T12:15:58.463346233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 12:15:58.463516 containerd[1599]: time="2026-01-24T12:15:58.463408918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes 
read=0" Jan 24 12:15:58.463950 kubelet[2795]: E0124 12:15:58.463822 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:15:58.463950 kubelet[2795]: E0124 12:15:58.463927 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:15:58.464421 kubelet[2795]: E0124 12:15:58.464264 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9sfpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8479996d5-hmtzg_calico-apiserver(1e987738-bc46-4583-b000-c8f1bfbb02a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 12:15:58.465237 containerd[1599]: time="2026-01-24T12:15:58.464721367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 12:15:58.465655 kubelet[2795]: E0124 12:15:58.465364 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8479996d5-hmtzg" podUID="1e987738-bc46-4583-b000-c8f1bfbb02a7" Jan 24 12:15:58.529674 containerd[1599]: time="2026-01-24T12:15:58.529482543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:15:58.531806 containerd[1599]: time="2026-01-24T12:15:58.531466323Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 12:15:58.531806 containerd[1599]: time="2026-01-24T12:15:58.531541994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 12:15:58.532031 kubelet[2795]: E0124 12:15:58.531905 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 12:15:58.532031 kubelet[2795]: E0124 12:15:58.531996 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 12:15:58.532250 kubelet[2795]: E0124 12:15:58.532209 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gr58p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c54756445-jqv9f_calico-system(e91bb632-3654-4ec6-9f05-a5289f173ff2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 12:15:58.533833 kubelet[2795]: E0124 12:15:58.533714 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c54756445-jqv9f" podUID="e91bb632-3654-4ec6-9f05-a5289f173ff2" Jan 24 12:16:00.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.151:22-10.0.0.1:39934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:00.306428 systemd[1]: Started sshd@11-10.0.0.151:22-10.0.0.1:39934.service - OpenSSH per-connection server daemon (10.0.0.1:39934). 
Jan 24 12:16:00.309693 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:16:00.309750 kernel: audit: type=1130 audit(1769256960.305:741): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.151:22-10.0.0.1:39934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:00.322362 containerd[1599]: time="2026-01-24T12:16:00.322330881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 12:16:00.397000 audit[5014]: USER_ACCT pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.399027 sshd[5014]: Accepted publickey for core from 10.0.0.1 port 39934 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:16:00.401973 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:16:00.408939 systemd-logind[1577]: New session 13 of user core. Jan 24 12:16:00.410422 containerd[1599]: time="2026-01-24T12:16:00.410189857Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:16:00.412069 containerd[1599]: time="2026-01-24T12:16:00.411917294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 12:16:00.412069 containerd[1599]: time="2026-01-24T12:16:00.411995009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 12:16:00.412461 kubelet[2795]: E0124 12:16:00.412308 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 12:16:00.412461 kubelet[2795]: E0124 12:16:00.412391 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 12:16:00.413896 kubelet[2795]: E0124 12:16:00.412499 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdwl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xv8ss_calico-system(51f294f0-54db-45aa-b128-8a4414560ade): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 12:16:00.399000 audit[5014]: CRED_ACQ pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.414485 kubelet[2795]: E0124 12:16:00.414242 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xv8ss" podUID="51f294f0-54db-45aa-b128-8a4414560ade" Jan 24 12:16:00.433309 kernel: audit: type=1101 audit(1769256960.397:742): pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.433425 kernel: audit: type=1103 audit(1769256960.399:743): pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.433450 kernel: audit: type=1006 audit(1769256960.399:744): pid=5014 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 24 12:16:00.399000 audit[5014]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc7ef2130 a2=3 a3=0 items=0 ppid=1 pid=5014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:00.459226 kernel: audit: type=1300 audit(1769256960.399:744): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc7ef2130 a2=3 a3=0 items=0 ppid=1 pid=5014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:00.459281 kernel: audit: type=1327 audit(1769256960.399:744): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:00.399000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:00.460681 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 24 12:16:00.463000 audit[5014]: USER_START pid=5014 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.487747 kernel: audit: type=1105 audit(1769256960.463:745): pid=5014 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.487801 kernel: audit: type=1103 audit(1769256960.465:746): pid=5018 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.465000 audit[5018]: CRED_ACQ pid=5018 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.571901 sshd[5018]: Connection closed by 10.0.0.1 port 39934 Jan 24 12:16:00.572533 sshd-session[5014]: pam_unix(sshd:session): session closed for user core Jan 24 12:16:00.574000 audit[5014]: USER_END pid=5014 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.574000 audit[5014]: CRED_DISP pid=5014 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.607201 kernel: audit: type=1106 audit(1769256960.574:747): pid=5014 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.607256 kernel: audit: type=1104 audit(1769256960.574:748): pid=5014 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.611271 systemd[1]: sshd@11-10.0.0.151:22-10.0.0.1:39934.service: Deactivated successfully. Jan 24 12:16:00.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.151:22-10.0.0.1:39934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:00.613401 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 12:16:00.615007 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. Jan 24 12:16:00.618775 systemd[1]: Started sshd@12-10.0.0.151:22-10.0.0.1:39946.service - OpenSSH per-connection server daemon (10.0.0.1:39946). 
Jan 24 12:16:00.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.151:22-10.0.0.1:39946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:00.619634 systemd-logind[1577]: Removed session 13. Jan 24 12:16:00.692000 audit[5032]: USER_ACCT pid=5032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.693843 sshd[5032]: Accepted publickey for core from 10.0.0.1 port 39946 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:16:00.693000 audit[5032]: CRED_ACQ pid=5032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.693000 audit[5032]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6e05be80 a2=3 a3=0 items=0 ppid=1 pid=5032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:00.693000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:00.696250 sshd-session[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:16:00.703484 systemd-logind[1577]: New session 14 of user core. Jan 24 12:16:00.713375 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 12:16:00.716000 audit[5032]: USER_START pid=5032 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.718000 audit[5036]: CRED_ACQ pid=5036 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.848304 sshd[5036]: Connection closed by 10.0.0.1 port 39946 Jan 24 12:16:00.849914 sshd-session[5032]: pam_unix(sshd:session): session closed for user core Jan 24 12:16:00.853000 audit[5032]: USER_END pid=5032 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.853000 audit[5032]: CRED_DISP pid=5032 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.860507 systemd[1]: sshd@12-10.0.0.151:22-10.0.0.1:39946.service: Deactivated successfully. Jan 24 12:16:00.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.151:22-10.0.0.1:39946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 24 12:16:00.863523 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 12:16:00.866316 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. Jan 24 12:16:00.870872 systemd-logind[1577]: Removed session 14. Jan 24 12:16:00.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.151:22-10.0.0.1:39960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:00.873558 systemd[1]: Started sshd@13-10.0.0.151:22-10.0.0.1:39960.service - OpenSSH per-connection server daemon (10.0.0.1:39960). Jan 24 12:16:00.941000 audit[5048]: USER_ACCT pid=5048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.942709 sshd[5048]: Accepted publickey for core from 10.0.0.1 port 39960 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:16:00.942000 audit[5048]: CRED_ACQ pid=5048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.942000 audit[5048]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9cfe4b80 a2=3 a3=0 items=0 ppid=1 pid=5048 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:00.942000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:00.945007 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:16:00.952708 systemd-logind[1577]: New session 15 of user core. Jan 24 12:16:00.966384 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 24 12:16:00.969000 audit[5048]: USER_START pid=5048 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:00.971000 audit[5052]: CRED_ACQ pid=5052 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:01.075967 sshd[5052]: Connection closed by 10.0.0.1 port 39960 Jan 24 12:16:01.076402 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Jan 24 12:16:01.077000 audit[5048]: USER_END pid=5048 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:01.078000 audit[5048]: CRED_DISP pid=5048 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:01.082943 systemd[1]: sshd@13-10.0.0.151:22-10.0.0.1:39960.service: Deactivated successfully. Jan 24 12:16:01.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.151:22-10.0.0.1:39960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:01.085479 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 12:16:01.089789 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. Jan 24 12:16:01.091466 systemd-logind[1577]: Removed session 15. Jan 24 12:16:06.095625 systemd[1]: Started sshd@14-10.0.0.151:22-10.0.0.1:48534.service - OpenSSH per-connection server daemon (10.0.0.1:48534). Jan 24 12:16:06.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.151:22-10.0.0.1:48534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:06.099766 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 24 12:16:06.099798 kernel: audit: type=1130 audit(1769256966.094:768): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.151:22-10.0.0.1:48534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:06.178000 audit[5073]: USER_ACCT pid=5073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.178877 sshd[5073]: Accepted publickey for core from 10.0.0.1 port 48534 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:16:06.181679 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:16:06.188841 systemd-logind[1577]: New session 16 of user core. 
Jan 24 12:16:06.194236 kernel: audit: type=1101 audit(1769256966.178:769): pid=5073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.194286 kernel: audit: type=1103 audit(1769256966.178:770): pid=5073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.178000 audit[5073]: CRED_ACQ pid=5073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.217208 kernel: audit: type=1006 audit(1769256966.179:771): pid=5073 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 24 12:16:06.217249 kernel: audit: type=1300 audit(1769256966.179:771): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb16c70a0 a2=3 a3=0 items=0 ppid=1 pid=5073 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:06.179000 audit[5073]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb16c70a0 a2=3 a3=0 items=0 ppid=1 pid=5073 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:06.179000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:06.240456 kernel: audit: type=1327 audit(1769256966.179:771): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:06.241677 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 24 12:16:06.244000 audit[5073]: USER_START pid=5073 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.248000 audit[5077]: CRED_ACQ pid=5077 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.280319 kernel: audit: type=1105 audit(1769256966.244:772): pid=5073 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.280354 kernel: audit: type=1103 audit(1769256966.248:773): pid=5077 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.360712 sshd[5077]: Connection closed by 10.0.0.1 port 48534 Jan 24 12:16:06.362892 sshd-session[5073]: pam_unix(sshd:session): session closed for user core Jan 24 12:16:06.364000 audit[5073]: USER_END pid=5073 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.364000 audit[5073]: CRED_DISP pid=5073 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.397994 kernel: audit: type=1106 audit(1769256966.364:774): pid=5073 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.398038 kernel: audit: type=1104 audit(1769256966.364:775): pid=5073 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.403774 systemd[1]: sshd@14-10.0.0.151:22-10.0.0.1:48534.service: Deactivated successfully. Jan 24 12:16:06.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.151:22-10.0.0.1:48534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:06.406599 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 12:16:06.408257 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. Jan 24 12:16:06.412629 systemd[1]: Started sshd@15-10.0.0.151:22-10.0.0.1:48546.service - OpenSSH per-connection server daemon (10.0.0.1:48546). 
Jan 24 12:16:06.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.151:22-10.0.0.1:48546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:06.413824 systemd-logind[1577]: Removed session 16. Jan 24 12:16:06.515000 audit[5091]: USER_ACCT pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.517225 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 48546 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:16:06.517000 audit[5091]: CRED_ACQ pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.517000 audit[5091]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc258127e0 a2=3 a3=0 items=0 ppid=1 pid=5091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:06.517000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:06.520464 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:16:06.527878 systemd-logind[1577]: New session 17 of user core. Jan 24 12:16:06.541617 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 12:16:06.544000 audit[5091]: USER_START pid=5091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.547000 audit[5101]: CRED_ACQ pid=5101 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.873919 sshd[5101]: Connection closed by 10.0.0.1 port 48546 Jan 24 12:16:06.874676 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Jan 24 12:16:06.877000 audit[5091]: USER_END pid=5091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.877000 audit[5091]: CRED_DISP pid=5091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.887420 systemd[1]: sshd@15-10.0.0.151:22-10.0.0.1:48546.service: Deactivated successfully. Jan 24 12:16:06.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.151:22-10.0.0.1:48546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 24 12:16:06.890999 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 12:16:06.893562 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit. Jan 24 12:16:06.898496 systemd[1]: Started sshd@16-10.0.0.151:22-10.0.0.1:48562.service - OpenSSH per-connection server daemon (10.0.0.1:48562). Jan 24 12:16:06.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.151:22-10.0.0.1:48562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:06.899862 systemd-logind[1577]: Removed session 17. Jan 24 12:16:06.992000 audit[5114]: USER_ACCT pid=5114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.994001 sshd[5114]: Accepted publickey for core from 10.0.0.1 port 48562 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:16:06.994000 audit[5114]: CRED_ACQ pid=5114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:06.994000 audit[5114]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc31d42630 a2=3 a3=0 items=0 ppid=1 pid=5114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:06.994000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:06.996264 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:16:07.005652 systemd-logind[1577]: New session 18 of user core. Jan 24 12:16:07.013391 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 24 12:16:07.017000 audit[5114]: USER_START pid=5114 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:07.020000 audit[5118]: CRED_ACQ pid=5118 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:07.590000 audit[5130]: NETFILTER_CFG table=filter:134 family=2 entries=26 op=nft_register_rule pid=5130 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:16:07.590000 audit[5130]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffca4d17600 a2=0 a3=7ffca4d175ec items=0 ppid=2961 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:07.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:16:07.601000 audit[5130]: NETFILTER_CFG table=nat:135 family=2 entries=20 op=nft_register_rule pid=5130 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:16:07.601000 audit[5130]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffca4d17600 a2=0 a3=0 items=0 ppid=2961 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:07.601000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:16:07.620223 sshd[5118]: Connection closed by 10.0.0.1 port 48562 Jan 24 12:16:07.621331 sshd-session[5114]: pam_unix(sshd:session): session closed for user core Jan 24 12:16:07.623000 audit[5114]: USER_END pid=5114 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:07.624000 audit[5114]: CRED_DISP pid=5114 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:07.631906 systemd[1]: sshd@16-10.0.0.151:22-10.0.0.1:48562.service: Deactivated successfully. Jan 24 12:16:07.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.151:22-10.0.0.1:48562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:07.637494 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 12:16:07.639599 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit. Jan 24 12:16:07.643810 systemd-logind[1577]: Removed session 18. 
Jan 24 12:16:07.651014 systemd[1]: Started sshd@17-10.0.0.151:22-10.0.0.1:48578.service - OpenSSH per-connection server daemon (10.0.0.1:48578). Jan 24 12:16:07.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.151:22-10.0.0.1:48578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:07.694000 audit[5139]: NETFILTER_CFG table=filter:136 family=2 entries=38 op=nft_register_rule pid=5139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:16:07.694000 audit[5139]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff964b0570 a2=0 a3=7fff964b055c items=0 ppid=2961 pid=5139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:07.694000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:16:07.700000 audit[5139]: NETFILTER_CFG table=nat:137 family=2 entries=20 op=nft_register_rule pid=5139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:16:07.700000 audit[5139]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff964b0570 a2=0 a3=0 items=0 ppid=2961 pid=5139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:07.700000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:16:07.729000 audit[5136]: USER_ACCT pid=5136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:07.730684 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 48578 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:16:07.730000 audit[5136]: CRED_ACQ pid=5136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:07.732000 audit[5136]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff61be1480 a2=3 a3=0 items=0 ppid=1 pid=5136 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:07.732000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:07.733467 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:16:07.741601 systemd-logind[1577]: New session 19 of user core. Jan 24 12:16:07.752605 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 24 12:16:07.756000 audit[5136]: USER_START pid=5136 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:07.759000 audit[5141]: CRED_ACQ pid=5141 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:07.983454 sshd[5141]: Connection closed by 10.0.0.1 port 48578 Jan 24 12:16:07.982275 sshd-session[5136]: pam_unix(sshd:session): session closed for user core Jan 24 12:16:07.984000 audit[5136]: USER_END pid=5136 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:07.984000 audit[5136]: CRED_DISP pid=5136 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:07.994685 systemd[1]: sshd@17-10.0.0.151:22-10.0.0.1:48578.service: Deactivated successfully. Jan 24 12:16:07.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.151:22-10.0.0.1:48578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:08.000705 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 12:16:08.003642 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit. Jan 24 12:16:08.008928 systemd[1]: Started sshd@18-10.0.0.151:22-10.0.0.1:48592.service - OpenSSH per-connection server daemon (10.0.0.1:48592). Jan 24 12:16:08.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.151:22-10.0.0.1:48592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:08.010251 systemd-logind[1577]: Removed session 19. 
Jan 24 12:16:08.075000 audit[5153]: USER_ACCT pid=5153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:08.077059 sshd[5153]: Accepted publickey for core from 10.0.0.1 port 48592 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:16:08.078000 audit[5153]: CRED_ACQ pid=5153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:08.078000 audit[5153]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca39bead0 a2=3 a3=0 items=0 ppid=1 pid=5153 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:08.078000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:08.079690 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:16:08.086860 systemd-logind[1577]: New session 20 of user core. Jan 24 12:16:08.095438 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 12:16:08.098000 audit[5153]: USER_START pid=5153 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:08.102000 audit[5157]: CRED_ACQ pid=5157 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:08.194634 sshd[5157]: Connection closed by 10.0.0.1 port 48592 Jan 24 12:16:08.194903 sshd-session[5153]: pam_unix(sshd:session): session closed for user core Jan 24 12:16:08.196000 audit[5153]: USER_END pid=5153 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:08.196000 audit[5153]: CRED_DISP pid=5153 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:08.199410 systemd[1]: sshd@18-10.0.0.151:22-10.0.0.1:48592.service: Deactivated successfully. Jan 24 12:16:08.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.151:22-10.0.0.1:48592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:08.201911 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 12:16:08.203767 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit. Jan 24 12:16:08.205067 systemd-logind[1577]: Removed session 20. 
Jan 24 12:16:08.321042 kubelet[2795]: E0124 12:16:08.320909 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79bb998d4d-94tq7" podUID="e2057d14-bb68-40a3-9abf-385c845f08ca" Jan 24 12:16:10.321896 kubelet[2795]: E0124 12:16:10.321858 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfhsw" podUID="d101ff7c-9560-44ae-a339-4a5dc1053aeb" Jan 24 12:16:11.321922 kubelet[2795]: E0124 12:16:11.321077 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xv8ss" podUID="51f294f0-54db-45aa-b128-8a4414560ade" Jan 24 12:16:11.325849 kubelet[2795]: E0124 12:16:11.325786 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c54756445-jqv9f" podUID="e91bb632-3654-4ec6-9f05-a5289f173ff2" Jan 24 12:16:12.321746 kubelet[2795]: E0124 12:16:12.321645 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-8479996d5-hmtzg" podUID="1e987738-bc46-4583-b000-c8f1bfbb02a7" Jan 24 12:16:12.322008 kubelet[2795]: E0124 12:16:12.321813 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8479996d5-zttk4" podUID="f06e4a34-950d-4dc0-91e1-512b91c976bf" Jan 24 12:16:12.587000 audit[5173]: NETFILTER_CFG table=filter:138 family=2 entries=26 op=nft_register_rule pid=5173 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:16:12.591652 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 24 12:16:12.592006 kernel: audit: type=1325 audit(1769256972.587:817): table=filter:138 family=2 entries=26 op=nft_register_rule pid=5173 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:16:12.587000 audit[5173]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffef4195e0 a2=0 a3=7fffef4195cc items=0 ppid=2961 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:12.617885 kernel: audit: type=1300 audit(1769256972.587:817): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffef4195e0 a2=0 a3=7fffef4195cc items=0 ppid=2961 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:12.587000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:16:12.626816 kernel: audit: type=1327 audit(1769256972.587:817): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:16:12.631000 audit[5173]: NETFILTER_CFG table=nat:139 family=2 entries=104 op=nft_register_chain pid=5173 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:16:12.631000 audit[5173]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffef4195e0 a2=0 a3=7fffef4195cc items=0 ppid=2961 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:12.659370 kernel: audit: type=1325 audit(1769256972.631:818): table=nat:139 family=2 entries=104 op=nft_register_chain pid=5173 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:16:12.659408 kernel: audit: type=1300 audit(1769256972.631:818): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffef4195e0 a2=0 a3=7fffef4195cc items=0 ppid=2961 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:12.659438 kernel: audit: type=1327 audit(1769256972.631:818): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:16:12.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:16:13.208984 systemd[1]: Started sshd@19-10.0.0.151:22-10.0.0.1:45888.service - OpenSSH per-connection server daemon (10.0.0.1:45888). Jan 24 12:16:13.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.151:22-10.0.0.1:45888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:13.223271 kernel: audit: type=1130 audit(1769256973.209:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.151:22-10.0.0.1:45888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:13.286000 audit[5176]: USER_ACCT pid=5176 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:13.288667 sshd[5176]: Accepted publickey for core from 10.0.0.1 port 45888 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:16:13.290020 sshd-session[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:16:13.297952 systemd-logind[1577]: New session 21 of user core. Jan 24 12:16:13.288000 audit[5176]: CRED_ACQ pid=5176 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:13.316353 kernel: audit: type=1101 audit(1769256973.286:820): pid=5176 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:13.316396 kernel: audit: type=1103 audit(1769256973.288:821): pid=5176 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:13.316435 kernel: audit: type=1006 audit(1769256973.288:822): pid=5176 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 24 12:16:13.288000 audit[5176]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdea9b9f00 a2=3 a3=0 items=0 ppid=1 pid=5176 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:13.288000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:13.326716 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 24 12:16:13.331000 audit[5176]: USER_START pid=5176 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:13.334000 audit[5180]: CRED_ACQ pid=5180 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:13.419442 sshd[5180]: Connection closed by 10.0.0.1 port 45888 Jan 24 12:16:13.419851 sshd-session[5176]: pam_unix(sshd:session): session closed for user core Jan 24 12:16:13.421000 audit[5176]: USER_END pid=5176 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:13.421000 audit[5176]: CRED_DISP pid=5176 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:13.425594 systemd[1]: sshd@19-10.0.0.151:22-10.0.0.1:45888.service: Deactivated successfully. Jan 24 12:16:13.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.151:22-10.0.0.1:45888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:13.428026 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 12:16:13.429858 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit. Jan 24 12:16:13.431569 systemd-logind[1577]: Removed session 21. Jan 24 12:16:16.288783 kubelet[2795]: E0124 12:16:16.288626 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:16:18.438418 systemd[1]: Started sshd@20-10.0.0.151:22-10.0.0.1:45900.service - OpenSSH per-connection server daemon (10.0.0.1:45900). Jan 24 12:16:18.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.151:22-10.0.0.1:45900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:18.443311 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 24 12:16:18.443380 kernel: audit: type=1130 audit(1769256978.438:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.151:22-10.0.0.1:45900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:16:18.558000 audit[5219]: USER_ACCT pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:18.558739 sshd[5219]: Accepted publickey for core from 10.0.0.1 port 45900 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:16:18.562228 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:16:18.570256 systemd-logind[1577]: New session 22 of user core. Jan 24 12:16:18.560000 audit[5219]: CRED_ACQ pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:18.580314 kernel: audit: type=1101 audit(1769256978.558:829): pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:18.580643 kernel: audit: type=1103 audit(1769256978.560:830): pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:18.607700 kernel: audit: type=1006 audit(1769256978.560:831): pid=5219 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 24 12:16:18.560000 audit[5219]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd34b0fda0 a2=3 a3=0 items=0 ppid=1 pid=5219 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:18.637837 kernel: audit: type=1300 audit(1769256978.560:831): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd34b0fda0 a2=3 a3=0 items=0 ppid=1 pid=5219 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:18.560000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:18.639423 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 24 12:16:18.647000 audit[5219]: USER_START pid=5219 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:18.678145 kernel: audit: type=1327 audit(1769256978.560:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:18.678210 kernel: audit: type=1105 audit(1769256978.647:832): pid=5219 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:18.654000 audit[5223]: CRED_ACQ pid=5223 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:18.694996 kernel: audit: type=1103 audit(1769256978.654:833): pid=5223 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:18.780339 sshd[5223]: Connection closed by 10.0.0.1 port 45900 Jan 24 12:16:18.781324 sshd-session[5219]: pam_unix(sshd:session): session closed for user core Jan 24 12:16:18.783000 audit[5219]: USER_END pid=5219 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:18.787916 systemd[1]: sshd@20-10.0.0.151:22-10.0.0.1:45900.service: Deactivated successfully. Jan 24 12:16:18.790326 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 12:16:18.792870 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit. Jan 24 12:16:18.794334 systemd-logind[1577]: Removed session 22. Jan 24 12:16:18.783000 audit[5219]: CRED_DISP pid=5219 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:18.807498 kernel: audit: type=1106 audit(1769256978.783:834): pid=5219 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:18.807547 kernel: audit: type=1104 audit(1769256978.783:835): pid=5219 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:18.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.151:22-10.0.0.1:45900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 24 12:16:20.321033 containerd[1599]: time="2026-01-24T12:16:20.320977032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 12:16:20.389466 containerd[1599]: time="2026-01-24T12:16:20.389387128Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:16:20.391221 containerd[1599]: time="2026-01-24T12:16:20.391003671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 12:16:20.391221 containerd[1599]: time="2026-01-24T12:16:20.391163190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 12:16:20.391361 kubelet[2795]: E0124 12:16:20.391320 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 12:16:20.391361 kubelet[2795]: E0124 12:16:20.391352 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 12:16:20.394773 kubelet[2795]: E0124 12:16:20.394620 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pg6sz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79bb998d4d-94tq7_calico-system(e2057d14-bb68-40a3-9abf-385c845f08ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 12:16:20.396124 kubelet[2795]: E0124 12:16:20.395986 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79bb998d4d-94tq7" podUID="e2057d14-bb68-40a3-9abf-385c845f08ca" Jan 24 12:16:22.321874 containerd[1599]: time="2026-01-24T12:16:22.321775880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 12:16:22.382629 containerd[1599]: time="2026-01-24T12:16:22.382563319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:16:22.384520 containerd[1599]: time="2026-01-24T12:16:22.384365410Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 12:16:22.384520 containerd[1599]: time="2026-01-24T12:16:22.384442098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 12:16:22.384731 kubelet[2795]: E0124 12:16:22.384614 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 12:16:22.384731 kubelet[2795]: E0124 12:16:22.384643 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 12:16:22.385243 kubelet[2795]: E0124 12:16:22.384745 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:470b03a2107d413e8e61da949363329c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gr58p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c54756445-jqv9f_calico-system(e91bb632-3654-4ec6-9f05-a5289f173ff2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 12:16:22.387334 containerd[1599]: time="2026-01-24T12:16:22.387271801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 12:16:22.447495 containerd[1599]: time="2026-01-24T12:16:22.447338832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:16:22.448825 containerd[1599]: time="2026-01-24T12:16:22.448734889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 12:16:22.448825 containerd[1599]: time="2026-01-24T12:16:22.448812905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 12:16:22.448987 kubelet[2795]: E0124 12:16:22.448933 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 12:16:22.448987 kubelet[2795]: E0124 12:16:22.448960 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 12:16:22.449073 kubelet[2795]: E0124 12:16:22.449023 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gr58p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c54756445-jqv9f_calico-system(e91bb632-3654-4ec6-9f05-a5289f173ff2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 12:16:22.450406 kubelet[2795]: E0124 12:16:22.450254 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c54756445-jqv9f" podUID="e91bb632-3654-4ec6-9f05-a5289f173ff2" Jan 24 12:16:23.326175 containerd[1599]: time="2026-01-24T12:16:23.325974905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 12:16:23.389182 containerd[1599]: time="2026-01-24T12:16:23.388988288Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:16:23.390906 containerd[1599]: time="2026-01-24T12:16:23.390776477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 
12:16:23.390906 containerd[1599]: time="2026-01-24T12:16:23.390864831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 12:16:23.392367 kubelet[2795]: E0124 12:16:23.391634 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 12:16:23.392367 kubelet[2795]: E0124 12:16:23.391684 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 12:16:23.392367 kubelet[2795]: E0124 12:16:23.392019 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdwl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xv8ss_calico-system(51f294f0-54db-45aa-b128-8a4414560ade): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 12:16:23.394294 containerd[1599]: time="2026-01-24T12:16:23.393592370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 12:16:23.394349 kubelet[2795]: E0124 12:16:23.394262 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xv8ss" podUID="51f294f0-54db-45aa-b128-8a4414560ade" Jan 24 12:16:23.456817 containerd[1599]: time="2026-01-24T12:16:23.456653738Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:16:23.458614 containerd[1599]: time="2026-01-24T12:16:23.458550305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 12:16:23.458974 containerd[1599]: time="2026-01-24T12:16:23.458618572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 12:16:23.459012 kubelet[2795]: E0124 12:16:23.458763 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:16:23.459012 kubelet[2795]: E0124 12:16:23.458964 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:16:23.459518 kubelet[2795]: E0124 12:16:23.459397 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8s4r4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8479996d5-zttk4_calico-apiserver(f06e4a34-950d-4dc0-91e1-512b91c976bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 12:16:23.461578 kubelet[2795]: E0124 12:16:23.461191 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8479996d5-zttk4" podUID="f06e4a34-950d-4dc0-91e1-512b91c976bf" Jan 24 12:16:23.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.151:22-10.0.0.1:38740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:23.792786 systemd[1]: Started sshd@21-10.0.0.151:22-10.0.0.1:38740.service - OpenSSH per-connection server daemon (10.0.0.1:38740). Jan 24 12:16:23.794800 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:16:23.794909 kernel: audit: type=1130 audit(1769256983.791:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.151:22-10.0.0.1:38740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:16:23.863000 audit[5236]: USER_ACCT pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:23.865008 sshd[5236]: Accepted publickey for core from 10.0.0.1 port 38740 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:16:23.867847 sshd-session[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:16:23.876152 kernel: audit: type=1101 audit(1769256983.863:838): pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:23.876197 kernel: audit: type=1103 audit(1769256983.865:839): pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:23.865000 audit[5236]: CRED_ACQ pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:23.874163 systemd-logind[1577]: New session 23 of user core. Jan 24 12:16:23.887578 kernel: audit: type=1006 audit(1769256983.865:840): pid=5236 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 24 12:16:23.887617 kernel: audit: type=1300 audit(1769256983.865:840): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc155de90 a2=3 a3=0 items=0 ppid=1 pid=5236 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:23.865000 audit[5236]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc155de90 a2=3 a3=0 items=0 ppid=1 pid=5236 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:16:23.888355 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 24 12:16:23.865000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:23.901409 kernel: audit: type=1327 audit(1769256983.865:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:16:23.890000 audit[5236]: USER_START pid=5236 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:23.912995 kernel: audit: type=1105 audit(1769256983.890:841): pid=5236 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:23.893000 audit[5240]: CRED_ACQ pid=5240 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:23.926196 kernel: audit: type=1103 audit(1769256983.893:842): pid=5240 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:23.983695 sshd[5240]: Connection closed by 10.0.0.1 port 38740 Jan 24 12:16:23.984202 sshd-session[5236]: pam_unix(sshd:session): session closed for user core Jan 24 12:16:23.984000 audit[5236]: USER_END pid=5236 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:23.989470 systemd[1]: sshd@21-10.0.0.151:22-10.0.0.1:38740.service: Deactivated successfully. Jan 24 12:16:23.991872 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 12:16:23.993265 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit. Jan 24 12:16:23.994843 systemd-logind[1577]: Removed session 23. 
Jan 24 12:16:23.985000 audit[5236]: CRED_DISP pid=5236 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:24.011145 kernel: audit: type=1106 audit(1769256983.984:843): pid=5236 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:24.011208 kernel: audit: type=1104 audit(1769256983.985:844): pid=5236 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:16:23.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.151:22-10.0.0.1:38740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:16:24.321446 containerd[1599]: time="2026-01-24T12:16:24.321332828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 12:16:24.386533 containerd[1599]: time="2026-01-24T12:16:24.386333754Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:16:24.387752 containerd[1599]: time="2026-01-24T12:16:24.387672181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 12:16:24.387752 containerd[1599]: time="2026-01-24T12:16:24.387730512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 12:16:24.387937 kubelet[2795]: E0124 12:16:24.387838 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:16:24.387937 kubelet[2795]: E0124 12:16:24.387902 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:16:24.388218 kubelet[2795]: E0124 12:16:24.387989 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9sfpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8479996d5-hmtzg_calico-apiserver(1e987738-bc46-4583-b000-c8f1bfbb02a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 12:16:24.389518 kubelet[2795]: E0124 12:16:24.389343 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8479996d5-hmtzg" podUID="1e987738-bc46-4583-b000-c8f1bfbb02a7" Jan 24 12:16:25.321199 kubelet[2795]: E0124 12:16:25.320890 2795 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:16:25.322046 containerd[1599]: time="2026-01-24T12:16:25.321944381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 12:16:25.391386 containerd[1599]: time="2026-01-24T12:16:25.391242188Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:16:25.392877 containerd[1599]: time="2026-01-24T12:16:25.392695948Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 
12:16:25.392877 containerd[1599]: time="2026-01-24T12:16:25.392819419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 12:16:25.393040 kubelet[2795]: E0124 12:16:25.392985 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 12:16:25.393176 kubelet[2795]: E0124 12:16:25.393050 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 12:16:25.393347 kubelet[2795]: E0124 12:16:25.393213 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brds4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vfhsw_calico-system(d101ff7c-9560-44ae-a339-4a5dc1053aeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 12:16:25.395223 containerd[1599]: time="2026-01-24T12:16:25.395172795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 12:16:25.458321 containerd[1599]: time="2026-01-24T12:16:25.458269534Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:16:25.460446 containerd[1599]: time="2026-01-24T12:16:25.460314088Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 12:16:25.460446 containerd[1599]: time="2026-01-24T12:16:25.460378477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 12:16:25.460708 kubelet[2795]: E0124 12:16:25.460658 2795 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 12:16:25.460708 kubelet[2795]: E0124 12:16:25.460689 2795 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 12:16:25.460864 kubelet[2795]: E0124 12:16:25.460768 2795 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brds4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vfhsw_calico-system(d101ff7c-9560-44ae-a339-4a5dc1053aeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 12:16:25.462147 kubelet[2795]: E0124 12:16:25.462017 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfhsw" podUID="d101ff7c-9560-44ae-a339-4a5dc1053aeb"