Jan 24 00:41:15.986604 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 21:38:55 -00 2026 Jan 24 00:41:15.986640 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877 Jan 24 00:41:15.986658 kernel: BIOS-provided physical RAM map: Jan 24 00:41:15.986664 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 24 00:41:15.986670 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 24 00:41:15.986676 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 24 00:41:15.986683 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 24 00:41:15.986689 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 24 00:41:15.986739 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 24 00:41:15.986747 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 24 00:41:15.986756 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 24 00:41:15.986762 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 24 00:41:15.986768 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 24 00:41:15.986774 kernel: NX (Execute Disable) protection: active Jan 24 00:41:15.986781 kernel: APIC: Static calls initialized Jan 24 00:41:15.986790 kernel: SMBIOS 2.8 present. 
Jan 24 00:41:15.986825 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 24 00:41:15.986832 kernel: DMI: Memory slots populated: 1/1 Jan 24 00:41:15.986839 kernel: Hypervisor detected: KVM Jan 24 00:41:15.986845 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 24 00:41:15.986851 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:41:15.986858 kernel: kvm-clock: using sched offset of 19443077603 cycles Jan 24 00:41:15.986866 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:41:15.986873 kernel: tsc: Detected 2445.426 MHz processor Jan 24 00:41:15.986884 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:41:15.986891 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:41:15.987005 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 24 00:41:15.987019 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 24 00:41:15.987026 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:41:15.987145 kernel: Using GB pages for direct mapping Jan 24 00:41:15.987157 kernel: ACPI: Early table checksum verification disabled Jan 24 00:41:15.987173 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 24 00:41:15.987185 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:41:15.987198 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:41:15.987209 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:41:15.987216 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 24 00:41:15.987223 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:41:15.987230 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:41:15.987240 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:41:15.987248 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:41:15.987258 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 24 00:41:15.987265 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 24 00:41:15.987272 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 24 00:41:15.987279 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 24 00:41:15.987289 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 24 00:41:15.987296 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 24 00:41:15.987303 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 24 00:41:15.987310 kernel: No NUMA configuration found Jan 24 00:41:15.987318 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 24 00:41:15.987325 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 24 00:41:15.987334 kernel: Zone ranges: Jan 24 00:41:15.987342 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:41:15.987349 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 24 00:41:15.987356 kernel: Normal empty Jan 24 00:41:15.987363 kernel: Device empty Jan 24 00:41:15.987370 kernel: Movable zone start for each node Jan 24 00:41:15.987377 kernel: Early memory node ranges Jan 24 00:41:15.987387 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Jan 24 00:41:15.987394 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 24 00:41:15.987401 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 24 00:41:15.987408 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:41:15.987415 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 24 00:41:15.987465 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 24 00:41:15.987473 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 00:41:15.987480 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:41:15.987490 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:41:15.987497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 00:41:15.987532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:41:15.987540 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:41:15.987547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:41:15.987705 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:41:15.987714 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:41:15.987725 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:41:15.987732 kernel: TSC deadline timer available Jan 24 00:41:15.987740 kernel: CPU topo: Max. logical packages: 1 Jan 24 00:41:15.987747 kernel: CPU topo: Max. logical dies: 1 Jan 24 00:41:15.987754 kernel: CPU topo: Max. dies per package: 1 Jan 24 00:41:15.987761 kernel: CPU topo: Max. threads per core: 1 Jan 24 00:41:15.987897 kernel: CPU topo: Num. cores per package: 4 Jan 24 00:41:15.987946 kernel: CPU topo: Num. threads per package: 4 Jan 24 00:41:15.987954 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 24 00:41:15.987961 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:41:15.987968 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 24 00:41:15.987975 kernel: kvm-guest: setup PV sched yield Jan 24 00:41:15.987983 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 24 00:41:15.987990 kernel: Booting paravirtualized kernel on KVM Jan 24 00:41:15.987997 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:41:15.988007 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 24 00:41:15.988014 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 24 00:41:15.988021 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 24 00:41:15.988249 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 24 00:41:15.988262 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:41:15.988270 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:41:15.988278 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877 Jan 24 00:41:15.988290 kernel: random: crng init done Jan 24 00:41:15.988297 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:41:15.988305 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 
00:41:15.988312 kernel: Fallback order for Node 0: 0 Jan 24 00:41:15.988319 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Jan 24 00:41:15.988326 kernel: Policy zone: DMA32 Jan 24 00:41:15.988336 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:41:15.988343 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 24 00:41:15.988350 kernel: ftrace: allocating 40128 entries in 157 pages Jan 24 00:41:15.988358 kernel: ftrace: allocated 157 pages with 5 groups Jan 24 00:41:15.988365 kernel: Dynamic Preempt: voluntary Jan 24 00:41:15.988372 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:41:15.988383 kernel: rcu: RCU event tracing is enabled. Jan 24 00:41:15.988393 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 24 00:41:15.988401 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:41:15.988440 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:41:15.988448 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:41:15.988456 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:41:15.988463 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 24 00:41:15.988470 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:41:15.988478 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:41:15.988488 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:41:15.988495 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 24 00:41:15.988502 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:41:15.988517 kernel: Console: colour VGA+ 80x25 Jan 24 00:41:15.988526 kernel: printk: legacy console [ttyS0] enabled Jan 24 00:41:15.988534 kernel: ACPI: Core revision 20240827 Jan 24 00:41:15.988541 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 24 00:41:15.988548 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:41:15.988556 kernel: x2apic enabled Jan 24 00:41:15.988566 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:41:15.988601 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 24 00:41:15.988609 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 24 00:41:15.988617 kernel: kvm-guest: setup PV IPIs Jan 24 00:41:15.988627 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 24 00:41:15.988635 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 24 00:41:15.988642 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 24 00:41:15.988649 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 24 00:41:15.988657 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 24 00:41:15.988664 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 24 00:41:15.988672 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:41:15.988681 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:41:15.988689 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:41:15.989012 kernel: Speculative Store Bypass: Vulnerable Jan 24 00:41:15.989022 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 24 00:41:15.989115 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 24 00:41:15.989131 kernel: active return thunk: srso_alias_return_thunk Jan 24 00:41:15.989145 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 24 00:41:15.989153 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 24 00:41:15.989160 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:41:15.989168 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:41:15.989175 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:41:15.989183 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:41:15.989190 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:41:15.989201 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 24 00:41:15.989208 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:41:15.989216 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:41:15.989223 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 24 00:41:15.989230 kernel: landlock: Up and running. Jan 24 00:41:15.989238 kernel: SELinux: Initializing. Jan 24 00:41:15.989245 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:41:15.989253 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:41:15.989302 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 24 00:41:15.989310 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 24 00:41:15.989318 kernel: signal: max sigframe size: 1776 Jan 24 00:41:15.989325 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:41:15.989333 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:41:15.989341 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 24 00:41:15.989349 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:41:15.989358 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:41:15.989366 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:41:15.989373 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 24 00:41:15.989381 kernel: smp: Brought up 1 node, 4 CPUs Jan 24 00:41:15.989388 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 24 00:41:15.989396 kernel: Memory: 2445296K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 120520K reserved, 0K cma-reserved) Jan 24 00:41:15.989404 kernel: devtmpfs: initialized Jan 24 00:41:15.989413 kernel: x86/mm: Memory block size: 128MB Jan 24 00:41:15.989421 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:41:15.989428 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 24 00:41:15.989436 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:41:15.989443 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:41:15.989451 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:41:15.989458 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:41:15.989467 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:41:15.989475 kernel: audit: type=2000 audit(1769215260.563:1): state=initialized audit_enabled=0 res=1 Jan 24 00:41:15.989482 kernel: cpuidle: using governor menu Jan 24 00:41:15.989490 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:41:15.989497 kernel: dca service started, version 1.12.1 Jan 24 00:41:15.989505 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 24 00:41:15.989512 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 24 00:41:15.989522 kernel: PCI: Using configuration type 1 for base access Jan 24 00:41:15.989529 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 24 00:41:15.989537 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:41:15.989544 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:41:15.989552 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:41:15.989559 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:41:15.989567 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:41:15.989576 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:41:15.989583 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:41:15.989591 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:41:15.989598 kernel: ACPI: Interpreter enabled Jan 24 00:41:15.989605 kernel: ACPI: PM: (supports S0 S3 S5) Jan 24 00:41:15.989613 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:41:15.989620 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:41:15.989630 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:41:15.989637 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 00:41:15.989645 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:41:15.989892 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:41:15.991186 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 24 00:41:15.991685 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 24 00:41:15.991714 kernel: PCI host bridge to bus 0000:00 Jan 24 00:41:15.992174 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 00:41:15.992416 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 00:41:15.992671 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:41:15.992885 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 24 00:41:15.993320 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 24 00:41:15.993646 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 24 00:41:15.993984 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:41:15.994401 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 24 00:41:15.994682 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 24 00:41:15.994882 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 24 00:41:15.995294 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 24 00:41:15.995468 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 24 00:41:15.995805 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:41:15.996183 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 13671 usecs Jan 24 00:41:15.996430 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 24 00:41:15.996608 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 24 00:41:15.996776 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 24 00:41:15.997154 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 24 00:41:15.997403 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 24 00:41:15.997709 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 24 00:41:15.998769 kernel: pci 0000:00:03.0: BAR 1 [mem 
0xfebd2000-0xfebd2fff] Jan 24 00:41:15.999255 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Jan 24 00:41:15.999520 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 24 00:41:15.999775 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jan 24 00:41:16.000195 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jan 24 00:41:16.000468 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 24 00:41:16.000734 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 24 00:41:16.001193 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 24 00:41:16.001417 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 24 00:41:16.001592 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 17578 usecs Jan 24 00:41:16.001822 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 24 00:41:16.002145 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jan 24 00:41:16.002368 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jan 24 00:41:16.002563 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 24 00:41:16.002732 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 24 00:41:16.002743 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:41:16.002752 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:41:16.002760 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:41:16.002768 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:41:16.002828 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 24 00:41:16.002837 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 24 00:41:16.002845 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 24 00:41:16.002853 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 24 00:41:16.002860 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 24 00:41:16.002868 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 24 00:41:16.002875 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 24 00:41:16.002886 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 24 00:41:16.002893 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 24 00:41:16.003178 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 24 00:41:16.003189 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 24 00:41:16.003196 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 24 00:41:16.003204 kernel: iommu: Default domain type: Translated Jan 24 00:41:16.003212 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:41:16.003224 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:41:16.003232 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:41:16.003240 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 24 00:41:16.003247 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 24 00:41:16.003469 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 24 00:41:16.003641 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 24 00:41:16.003865 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:41:16.003880 kernel: vgaarb: loaded Jan 24 00:41:16.003889 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 24 00:41:16.003896 kernel: hpet0: 
3 comparators, 64-bit 100.000000 MHz counter Jan 24 00:41:16.003959 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:41:16.003967 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:41:16.003975 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:41:16.003982 kernel: pnp: PnP ACPI init Jan 24 00:41:16.004355 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 24 00:41:16.004380 kernel: pnp: PnP ACPI: found 6 devices Jan 24 00:41:16.004395 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:41:16.004607 kernel: NET: Registered PF_INET protocol family Jan 24 00:41:16.004619 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:41:16.004627 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 24 00:41:16.004635 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:41:16.004648 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:41:16.004655 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 24 00:41:16.004663 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 24 00:41:16.004670 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:41:16.004678 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:41:16.004686 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:41:16.004693 kernel: NET: Registered PF_XDP protocol family Jan 24 00:41:16.005138 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:41:16.005428 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:41:16.005633 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:41:16.005792 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 24 00:41:16.006220 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 24 00:41:16.006446 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 24 00:41:16.006471 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:41:16.006484 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 24 00:41:16.006497 kernel: Initialise system trusted keyrings Jan 24 00:41:16.006511 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 24 00:41:16.006524 kernel: Key type asymmetric registered Jan 24 00:41:16.006537 kernel: Asymmetric key parser 'x509' registered Jan 24 00:41:16.006552 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 24 00:41:16.006569 kernel: io scheduler mq-deadline registered Jan 24 00:41:16.006581 kernel: io scheduler kyber registered Jan 24 00:41:16.006591 kernel: io scheduler bfq registered Jan 24 00:41:16.006604 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:41:16.006619 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 00:41:16.006629 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 00:41:16.006640 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 24 00:41:16.006654 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:41:16.006671 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:41:16.006681 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:41:16.006693 kernel: serio: 
i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:41:16.006706 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:41:16.006717 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:41:16.007130 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 24 00:41:16.007389 kernel: rtc_cmos 00:04: registered as rtc0 Jan 24 00:41:16.007626 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:41:09 UTC (1769215269) Jan 24 00:41:16.007883 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 24 00:41:16.007961 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 24 00:41:16.007976 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:41:16.007988 kernel: Segment Routing with IPv6 Jan 24 00:41:16.007999 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:41:16.008016 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:41:16.008116 kernel: Key type dns_resolver registered Jan 24 00:41:16.008131 kernel: IPI shorthand broadcast: enabled Jan 24 00:41:16.008143 kernel: sched_clock: Marking stable (8190025972, 1720835478)->(11281874148, -1371012698) Jan 24 00:41:16.008154 kernel: registered taskstats version 1 Jan 24 00:41:16.008166 kernel: Loading compiled-in X.509 certificates Jan 24 00:41:16.008177 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 08600fac738f210e3b32f727339edfe2b1af2e3d' Jan 24 00:41:16.008193 kernel: Demotion targets for Node 0: null Jan 24 00:41:16.008206 kernel: Key type .fscrypt registered Jan 24 00:41:16.008220 kernel: Key type fscrypt-provisioning registered Jan 24 00:41:16.008232 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 00:41:16.008242 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:41:16.008256 kernel: ima: No architecture policies found Jan 24 00:41:16.008269 kernel: clk: Disabling unused clocks Jan 24 00:41:16.008287 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 24 00:41:16.008300 kernel: Write protecting the kernel read-only data: 47104k Jan 24 00:41:16.008312 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 24 00:41:16.008326 kernel: Run /init as init process Jan 24 00:41:16.008338 kernel: with arguments: Jan 24 00:41:16.008352 kernel: /init Jan 24 00:41:16.008362 kernel: with environment: Jan 24 00:41:16.008378 kernel: HOME=/ Jan 24 00:41:16.008389 kernel: TERM=linux Jan 24 00:41:16.008400 kernel: SCSI subsystem initialized Jan 24 00:41:16.008412 kernel: libata version 3.00 loaded. 
Jan 24 00:41:16.008653 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:41:16.008670 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:41:16.008891 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 24 00:41:16.009309 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 24 00:41:16.010619 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:41:16.010889 kernel: scsi host0: ahci Jan 24 00:41:16.011279 kernel: scsi host1: ahci Jan 24 00:41:16.011547 kernel: scsi host2: ahci Jan 24 00:41:16.011799 kernel: scsi host3: ahci Jan 24 00:41:16.012325 kernel: scsi host4: ahci Jan 24 00:41:16.012579 kernel: scsi host5: ahci Jan 24 00:41:16.012598 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Jan 24 00:41:16.012611 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Jan 24 00:41:16.012623 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Jan 24 00:41:16.012640 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Jan 24 00:41:16.012652 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Jan 24 00:41:16.012664 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Jan 24 00:41:16.012675 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 00:41:16.012687 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 24 00:41:16.012699 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:41:16.012711 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:41:16.012725 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:41:16.012737 kernel: ata3.00: LPM support broken, forcing max_power Jan 24 00:41:16.012749 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 24 00:41:16.012760 kernel: ata3.00: applying bridge limits Jan 24 00:41:16.012772 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:41:16.012784 kernel: ata3.00: LPM support broken, forcing max_power Jan 24 00:41:16.012796 kernel: ata3.00: configured for UDMA/100 Jan 24 00:41:16.013245 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 24 00:41:16.013558 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 24 00:41:16.013882 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 24 00:41:16.014022 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:41:16.014128 kernel: GPT:16515071 != 27000831 Jan 24 00:41:16.014143 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:41:16.014162 kernel: GPT:16515071 != 27000831 Jan 24 00:41:16.014173 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:41:16.014187 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:41:16.016124 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 24 00:41:16.016150 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:41:16.016446 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 24 00:41:16.016467 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 24 00:41:16.016489 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:41:16.016502 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 24 00:41:16.016516 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 24 00:41:16.016528 kernel: raid6: avx2x4 gen() 21218 MB/s Jan 24 00:41:16.016542 kernel: raid6: avx2x2 gen() 20491 MB/s Jan 24 00:41:16.016556 kernel: raid6: avx2x1 gen() 11997 MB/s Jan 24 00:41:16.016568 kernel: raid6: using algorithm avx2x4 gen() 21218 MB/s Jan 24 00:41:16.016587 kernel: raid6: .... xor() 3137 MB/s, rmw enabled Jan 24 00:41:16.016600 kernel: raid6: using avx2x2 recovery algorithm Jan 24 00:41:16.016614 kernel: xor: automatically using best checksumming function avx Jan 24 00:41:16.016632 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:41:16.016646 kernel: BTRFS: device fsid 091bfa4a-922a-4e6e-abc1-a4b74083975f devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (182) Jan 24 00:41:16.016664 kernel: BTRFS info (device dm-0): first mount of filesystem 091bfa4a-922a-4e6e-abc1-a4b74083975f Jan 24 00:41:16.016676 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:41:16.016690 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:41:16.016705 kernel: BTRFS info (device dm-0): enabling free space tree Jan 24 00:41:16.016716 kernel: loop: module loaded Jan 24 00:41:16.016731 kernel: loop0: detected capacity change from 0 to 100560 Jan 24 00:41:16.016745 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:41:16.016763 systemd[1]: Successfully made /usr/ read-only. Jan 24 00:41:16.016781 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 24 00:41:16.016797 systemd[1]: Detected virtualization kvm. Jan 24 00:41:16.016808 systemd[1]: Detected architecture x86-64. Jan 24 00:41:16.016823 systemd[1]: Running in initrd. Jan 24 00:41:16.016841 systemd[1]: No hostname configured, using default hostname. Jan 24 00:41:16.016855 systemd[1]: Hostname set to . Jan 24 00:41:16.016869 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 24 00:41:16.016883 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:41:16.016897 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:41:16.017704 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:41:16.017719 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:41:16.017741 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:41:16.017756 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:41:16.017771 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:41:16.017785 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:41:16.017800 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 24 00:41:16.017819 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:41:16.017832 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 24 00:41:16.017847 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:41:16.017862 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:41:16.017873 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:41:16.017889 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:41:16.017968 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:41:16.017992 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:41:16.018006 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:41:16.018018 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:41:16.018141 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 24 00:41:16.018160 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:41:16.018176 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:41:16.018189 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:41:16.018210 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:41:16.018225 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:41:16.018239 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:41:16.018253 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:41:16.018267 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:41:16.018283 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 24 00:41:16.018302 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:41:16.018316 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:41:16.018330 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:41:16.018346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:41:16.018364 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:41:16.018419 systemd-journald[320]: Collecting audit messages is enabled. Jan 24 00:41:16.018454 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:41:16.018474 kernel: audit: type=1130 audit(1769215275.978:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.018490 kernel: audit: type=1130 audit(1769215276.013:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.018503 systemd-journald[320]: Journal started Jan 24 00:41:16.018530 systemd-journald[320]: Runtime Journal (/run/log/journal/bf62c33b161c43959045d6b3d80b9b4b) is 6M, max 48.2M, 42.1M free. Jan 24 00:41:16.020017 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 24 00:41:15.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.043709 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:41:16.043750 kernel: audit: type=1130 audit(1769215276.037:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.086150 kernel: audit: type=1130 audit(1769215276.056:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.089453 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:41:16.145290 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:41:16.155637 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:41:16.440449 kernel: Bridge firewalling registered Jan 24 00:41:16.161316 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 24 00:41:16.443193 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:41:16.494157 kernel: audit: type=1130 audit(1769215276.461:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.472755 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 24 00:41:16.538656 kernel: audit: type=1130 audit(1769215276.505:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.488248 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:41:16.546458 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 24 00:41:16.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.590567 kernel: audit: type=1130 audit(1769215276.571:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.592442 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:41:16.635147 kernel: audit: type=1130 audit(1769215276.603:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.620546 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:41:16.680292 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:41:16.683482 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:41:16.773796 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:41:16.789755 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:41:16.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.840115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:41:16.847589 kernel: audit: type=1130 audit(1769215276.780:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.871501 dracut-cmdline[354]: dracut-109 Jan 24 00:41:16.883619 kernel: audit: type=1130 audit(1769215276.852:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:16.883656 dracut-cmdline[354]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877 Jan 24 00:41:16.919376 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:41:16.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:41:16.931000 audit: BPF prog-id=6 op=LOAD Jan 24 00:41:16.936709 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:41:17.082201 systemd-resolved[372]: Positive Trust Anchors: Jan 24 00:41:17.082267 systemd-resolved[372]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:41:17.082273 systemd-resolved[372]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 00:41:17.082320 systemd-resolved[372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:41:17.156170 systemd-resolved[372]: Defaulting to hostname 'linux'. Jan 24 00:41:17.172374 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:41:17.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:17.195411 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:41:17.346279 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:41:17.406443 kernel: iscsi: registered transport (tcp) Jan 24 00:41:17.466025 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:41:17.466156 kernel: QLogic iSCSI HBA Driver Jan 24 00:41:17.563637 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:41:17.622009 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:41:17.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:17.636570 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:41:17.783645 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:41:17.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:17.787269 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:41:17.813000 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:41:17.916980 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:41:17.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:17.932000 audit: BPF prog-id=7 op=LOAD Jan 24 00:41:17.932000 audit: BPF prog-id=8 op=LOAD Jan 24 00:41:17.936329 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:41:17.990702 systemd-udevd[587]: Using default interface naming scheme 'v257'. 
Jan 24 00:41:18.030276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:41:18.043315 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:41:18.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:18.103665 dracut-pre-trigger[633]: rd.md=0: removing MD RAID activation Jan 24 00:41:18.188416 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:41:18.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:18.218296 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:41:18.232786 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:41:18.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:18.266000 audit: BPF prog-id=9 op=LOAD Jan 24 00:41:18.268663 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:41:18.399977 systemd-networkd[727]: lo: Link UP Jan 24 00:41:18.400281 systemd-networkd[727]: lo: Gained carrier Jan 24 00:41:18.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:18.406482 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:41:18.418258 systemd[1]: Reached target network.target - Network. Jan 24 00:41:18.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:18.433600 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:41:18.458558 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:41:18.546557 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 00:41:18.594568 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 24 00:41:18.648326 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 00:41:18.690609 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:41:18.709243 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:41:18.730629 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:41:18.729307 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:41:18.729473 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:41:18.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:18.761584 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 24 00:41:18.777579 disk-uuid[769]: Primary Header is updated. Jan 24 00:41:18.777579 disk-uuid[769]: Secondary Entries is updated. Jan 24 00:41:18.777579 disk-uuid[769]: Secondary Header is updated. Jan 24 00:41:18.803420 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:41:18.870146 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 24 00:41:18.882538 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:41:18.882551 systemd-networkd[727]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:41:18.885367 systemd-networkd[727]: eth0: Link UP Jan 24 00:41:18.892381 systemd-networkd[727]: eth0: Gained carrier Jan 24 00:41:19.335589 kernel: AES CTR mode by8 optimization enabled Jan 24 00:41:18.892406 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:41:18.939411 systemd-networkd[727]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:41:19.102721 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:41:19.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:19.337988 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:41:19.338159 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:41:19.338217 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:41:19.354343 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:41:19.480396 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:41:19.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:19.520686 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:41:19.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:19.870881 disk-uuid[771]: Warning: The kernel is still using the old partition table. Jan 24 00:41:19.870881 disk-uuid[771]: The new table will be used at the next reboot or after you Jan 24 00:41:19.870881 disk-uuid[771]: run partprobe(8) or kpartx(8) Jan 24 00:41:19.870881 disk-uuid[771]: The operation has completed successfully. Jan 24 00:41:19.958898 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:41:19.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:19.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:19.959314 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 24 00:41:19.974346 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:41:20.180853 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (862) Jan 24 00:41:20.192398 kernel: BTRFS info (device vda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:41:20.192453 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:41:20.254129 kernel: BTRFS info (device vda6): turning on async discard Jan 24 00:41:20.254198 kernel: BTRFS info (device vda6): enabling free space tree Jan 24 00:41:20.285180 kernel: BTRFS info (device vda6): last unmount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:41:20.312407 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:41:20.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:20.346495 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:41:20.542339 systemd-networkd[727]: eth0: Gained IPv6LL Jan 24 00:41:21.017780 ignition[881]: Ignition 2.24.0 Jan 24 00:41:21.017841 ignition[881]: Stage: fetch-offline Jan 24 00:41:21.017914 ignition[881]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:41:21.018999 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:41:21.019517 ignition[881]: parsed url from cmdline: "" Jan 24 00:41:21.019524 ignition[881]: no config URL provided Jan 24 00:41:21.019533 ignition[881]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:41:21.019558 ignition[881]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:41:21.019705 ignition[881]: op(1): [started] loading QEMU firmware config module Jan 24 00:41:21.019713 ignition[881]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 24 00:41:21.061803 ignition[881]: op(1): [finished] loading QEMU firmware config module Jan 24 00:41:21.522291 ignition[881]: parsing config with SHA512: 9485ecba59f0ea67fb0f8996a6797990004d2876ebc34cf95e78ffee1cc077b69a31c7932f4ea2adec7b6d8ef017c817a96f60c86b3f03d9abb58e7523e1826b Jan 24 00:41:21.560239 unknown[881]: fetched base config from "system" Jan 24 00:41:21.560923 ignition[881]: fetch-offline: fetch-offline passed Jan 24 00:41:21.603246 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 24 00:41:21.603285 kernel: audit: type=1130 audit(1769215281.576:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:21.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:21.560282 unknown[881]: fetched user config from "qemu" Jan 24 00:41:21.561129 ignition[881]: Ignition finished successfully Jan 24 00:41:21.568009 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:41:21.576505 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 24 00:41:21.578260 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
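The "parsing config with SHA512" entry above is the fingerprint Ignition logs for the config it is about to apply. A minimal sketch for reproducing that kind of fingerprint outside of Ignition, under the assumption that the value is simply the SHA-512 of the raw config bytes (the log does not show exactly which bytes are hashed) and using a hypothetical local copy of the config rather than the qemu_fw_cfg channel seen above:

    import hashlib

    # Hypothetical path; on this QEMU boot the config actually arrives via the
    # qemu_fw_cfg module loaded by op(1), not from a file on disk.
    with open("config.ign", "rb") as f:
        digest = hashlib.sha512(f.read()).hexdigest()

    # Comparable to the "parsing config with SHA512: ..." value if the assumption holds.
    print(digest)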
Jan 24 00:41:21.684701 ignition[892]: Ignition 2.24.0 Jan 24 00:41:21.684758 ignition[892]: Stage: kargs Jan 24 00:41:21.688244 ignition[892]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:41:21.688260 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:41:21.700504 ignition[892]: kargs: kargs passed Jan 24 00:41:21.700579 ignition[892]: Ignition finished successfully Jan 24 00:41:21.729827 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:41:21.766300 kernel: audit: type=1130 audit(1769215281.745:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:21.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:21.749308 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:41:21.832610 ignition[899]: Ignition 2.24.0 Jan 24 00:41:21.832665 ignition[899]: Stage: disks Jan 24 00:41:21.834177 ignition[899]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:41:21.834194 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:41:21.838824 ignition[899]: disks: disks passed Jan 24 00:41:21.840720 ignition[899]: Ignition finished successfully Jan 24 00:41:21.880891 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:41:21.882379 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:41:21.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:21.901756 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:41:21.924604 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:41:21.940295 kernel: audit: type=1130 audit(1769215281.880:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:21.940204 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:41:21.950615 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:41:21.968463 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:41:22.092634 systemd-fsck[908]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 24 00:41:22.109620 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:41:22.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:22.131224 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:41:22.167344 kernel: audit: type=1130 audit(1769215282.118:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:22.676269 kernel: EXT4-fs (vda9): mounted filesystem 4e30a7d6-83d2-471c-98e0-68a57c0656af r/w with ordered data mode. Quota mode: none. Jan 24 00:41:22.677363 systemd[1]: Mounted sysroot.mount - /sysroot. 
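The systemd-fsck summary above, "clean, 15/456736 files, 38230/456704 blocks", reads as used/total counts for inodes and blocks in the usual e2fsck style. A small sketch that turns such a pair into usage percentages, with the numbers copied from that entry:

    # used/total pairs as printed in the fsck summary above
    files_used, files_total = 15, 456736
    blocks_used, blocks_total = 38230, 456704

    print(f"inodes: {100 * files_used / files_total:.2f}% used")    # ~0.00%
    print(f"blocks: {100 * blocks_used / blocks_total:.2f}% used")  # ~8.37%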
Jan 24 00:41:22.684908 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:41:22.697606 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:41:22.743530 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:41:22.752020 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:41:22.752173 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:41:22.752218 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:41:22.774568 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (916) Jan 24 00:41:22.815831 kernel: BTRFS info (device vda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:41:22.815910 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:41:22.828913 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:41:22.844414 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:41:22.886684 kernel: BTRFS info (device vda6): turning on async discard Jan 24 00:41:22.886767 kernel: BTRFS info (device vda6): enabling free space tree Jan 24 00:41:22.897447 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:41:23.651820 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:41:23.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:23.677226 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:41:23.697173 kernel: audit: type=1130 audit(1769215283.672:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:23.694270 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:41:23.739283 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:41:23.755568 kernel: BTRFS info (device vda6): last unmount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:41:23.851788 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:41:23.872342 ignition[1014]: INFO : Ignition 2.24.0 Jan 24 00:41:23.872342 ignition[1014]: INFO : Stage: mount Jan 24 00:41:23.872342 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:41:23.872342 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:41:23.915185 kernel: audit: type=1130 audit(1769215283.871:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:23.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:23.908534 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:41:23.948541 kernel: audit: type=1130 audit(1769215283.924:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 24 00:41:23.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:23.948640 ignition[1014]: INFO : mount: mount passed Jan 24 00:41:23.948640 ignition[1014]: INFO : Ignition finished successfully Jan 24 00:41:23.930491 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:41:24.007564 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:41:24.077885 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1026) Jan 24 00:41:24.089466 kernel: BTRFS info (device vda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:41:24.089520 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:41:24.137867 kernel: BTRFS info (device vda6): turning on async discard Jan 24 00:41:24.138004 kernel: BTRFS info (device vda6): enabling free space tree Jan 24 00:41:24.148732 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:41:24.252511 ignition[1043]: INFO : Ignition 2.24.0 Jan 24 00:41:24.252511 ignition[1043]: INFO : Stage: files Jan 24 00:41:24.277530 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:41:24.277530 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:41:24.277530 ignition[1043]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:41:24.316615 ignition[1043]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:41:24.316615 ignition[1043]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:41:24.316615 ignition[1043]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:41:24.316615 ignition[1043]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:41:24.316615 ignition[1043]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:41:24.313617 unknown[1043]: wrote ssh authorized keys file for user: core Jan 24 00:41:24.361266 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:41:24.361266 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 24 00:41:24.477279 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:41:24.696668 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:41:24.696668 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:41:24.721825 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:41:24.721825 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:41:24.721825 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:41:24.721825 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] 
writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:41:24.721825 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:41:24.721825 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:41:24.721825 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:41:24.721825 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:41:24.721825 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:41:24.721825 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:41:24.721825 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:41:24.721825 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:41:24.721825 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 24 00:41:25.073303 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:41:25.843438 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:41:25.843438 ignition[1043]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:41:25.865160 ignition[1043]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:41:25.865160 ignition[1043]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:41:25.865160 ignition[1043]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:41:25.865160 ignition[1043]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 24 00:41:25.865160 ignition[1043]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:41:25.865160 ignition[1043]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:41:25.865160 ignition[1043]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 24 00:41:25.865160 ignition[1043]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 24 00:41:26.109511 ignition[1043]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:41:26.179183 ignition[1043]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:41:26.179183 ignition[1043]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Jan 24 00:41:26.179183 ignition[1043]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:41:26.179183 ignition[1043]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:41:26.179183 ignition[1043]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:41:26.179183 ignition[1043]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:41:26.179183 ignition[1043]: INFO : files: files passed Jan 24 00:41:26.179183 ignition[1043]: INFO : Ignition finished successfully Jan 24 00:41:26.191295 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:41:26.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:26.267520 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:41:26.305185 kernel: audit: type=1130 audit(1769215286.261:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:26.299922 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:41:26.359555 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:41:26.373183 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:41:26.386169 initrd-setup-root-after-ignition[1073]: grep: /sysroot/oem/oem-release: No such file or directory Jan 24 00:41:26.403699 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:41:26.403699 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:41:26.520945 kernel: audit: type=1130 audit(1769215286.410:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:26.522274 kernel: audit: type=1131 audit(1769215286.410:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:26.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:26.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:26.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:26.457343 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jan 24 00:41:26.555288 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:41:26.503254 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:41:26.544295 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:41:26.764738 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:41:26.765166 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:41:26.875622 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:41:26.875686 kernel: audit: type=1130 audit(1769215286.794:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:26.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:26.795909 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:41:26.899446 kernel: audit: type=1131 audit(1769215286.794:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:26.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:26.806843 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:41:26.813488 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:41:26.815556 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:41:27.002199 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:41:27.043313 kernel: audit: type=1130 audit(1769215287.002:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:27.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:27.006596 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:41:27.154186 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:41:27.154493 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:41:27.224470 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:41:27.345881 kernel: audit: type=1131 audit(1769215287.293:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:27.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:27.292473 systemd[1]: Stopped target timers.target - Timer Units. 
Jan 24 00:41:27.292920 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:41:27.293293 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:41:27.294728 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:41:27.363778 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:41:27.384118 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:41:27.384641 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:41:27.418317 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:41:27.418848 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 24 00:41:27.462293 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:41:27.570670 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:41:27.588772 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:41:27.656682 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:41:27.769933 kernel: audit: type=1131 audit(1769215287.715:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:27.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:27.676555 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:41:27.701137 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:41:27.701942 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:41:27.770612 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:41:27.887927 kernel: audit: type=1131 audit(1769215287.855:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:27.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:27.807549 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:41:27.814826 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:41:27.815510 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:41:27.962669 kernel: audit: type=1131 audit(1769215287.913:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:27.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:27.824410 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:41:27.828827 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:41:27.888514 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 24 00:41:27.888844 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:41:27.913676 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:41:27.974410 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:41:27.975121 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:41:28.069149 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:41:28.157883 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:41:28.186159 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:41:28.186421 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:41:28.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:28.202778 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:41:28.412784 kernel: audit: type=1131 audit(1769215288.318:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:28.412832 kernel: audit: type=1131 audit(1769215288.366:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:28.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:28.207761 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:41:28.289861 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 24 00:41:28.290775 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:41:28.303439 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:41:28.305311 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:41:28.318673 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:41:28.318956 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:41:28.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:28.382955 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:41:28.504645 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:41:28.755689 kernel: audit: type=1131 audit(1769215288.682:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:28.621219 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:41:28.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:28.622941 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 24 00:41:28.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:28.756967 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:41:28.759556 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:41:28.822271 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:41:28.822625 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:41:28.919374 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:41:30.368906 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1544126198 wd_nsec: 1544126048 Jan 24 00:41:30.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:30.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:30.385379 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:41:30.486227 ignition[1100]: INFO : Ignition 2.24.0 Jan 24 00:41:30.486227 ignition[1100]: INFO : Stage: umount Jan 24 00:41:30.499911 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:41:30.499911 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:41:30.499911 ignition[1100]: INFO : umount: umount passed Jan 24 00:41:30.499911 ignition[1100]: INFO : Ignition finished successfully Jan 24 00:41:30.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:30.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:30.501639 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:41:30.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:30.503272 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:41:30.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:30.504965 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:41:30.514940 systemd[1]: Stopped target network.target - Network. Jan 24 00:41:30.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:30.537466 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:41:30.538528 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:41:30.560605 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 24 00:41:30.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:30.560889 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:41:30.592848 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:41:30.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:30.596170 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:41:30.620143 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:41:30.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:30.620365 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:41:30.667859 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:41:30.692529 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:41:30.881000 audit: BPF prog-id=6 op=UNLOAD Jan 24 00:41:30.708288 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:41:30.708670 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:41:30.759376 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:41:30.912000 audit: BPF prog-id=9 op=UNLOAD Jan 24 00:41:30.759651 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:41:30.810275 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:41:30.810674 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:41:30.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:30.884592 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 24 00:41:30.947682 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:41:30.947962 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:41:30.979283 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:41:30.979488 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:41:31.016963 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:41:31.074593 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:41:31.075382 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:41:31.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:31.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:31.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:41:31.106302 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:41:31.106569 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:41:31.120418 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:41:31.120572 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:41:31.156282 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:41:31.267914 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:41:31.268686 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:41:31.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:31.294667 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:41:31.294801 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:41:31.310962 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:41:31.311221 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:41:31.350871 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:41:31.351176 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:41:31.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:31.407929 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:41:31.412922 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:41:31.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:31.478772 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:41:31.481860 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:41:31.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:31.556448 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:41:31.570707 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 24 00:41:31.570912 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:41:31.606812 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:41:31.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:31.607202 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:41:31.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:41:31.666324 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 24 00:41:31.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:31.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:31.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:31.666473 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:41:31.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:31.682733 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:41:31.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:31.682862 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:41:31.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:31.693491 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:41:31.693626 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:41:31.695271 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:41:31.695461 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:41:31.736925 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:41:31.737669 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:41:31.812960 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:41:31.824473 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:41:31.901603 systemd[1]: Switching root. Jan 24 00:41:32.010566 systemd-journald[320]: Journal stopped Jan 24 00:41:36.053677 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). 
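This stretch runs from the systemd-udevd start near the top to "Journal stopped" at 00:41:32, when PID 1 switches root and the initrd journal hands off. For pulling such durations out of the log, a small sketch using two timestamps copied from the entries above (the timestamps carry no year, which does not matter for a same-day difference):

    from datetime import datetime

    FMT = "%b %d %H:%M:%S.%f"
    start = datetime.strptime("Jan 24 00:41:18.030276", FMT)  # systemd-udevd started
    stop  = datetime.strptime("Jan 24 00:41:32.010566", FMT)  # systemd-journald: Journal stopped

    print(f"window covered: {(stop - start).total_seconds():.2f} s")  # ~13.98 s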
Jan 24 00:41:36.053782 kernel: kauditd_printk_skb: 30 callbacks suppressed Jan 24 00:41:36.053811 kernel: audit: type=1335 audit(1769215292.021:84): pid=320 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1 Jan 24 00:41:36.053839 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:41:36.053856 kernel: SELinux: policy capability open_perms=1 Jan 24 00:41:36.053874 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:41:36.053895 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:41:36.054493 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:41:36.054521 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:41:36.054541 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:41:36.054561 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:41:36.054579 kernel: SELinux: policy capability userspace_initial_context=0 Jan 24 00:41:36.054596 kernel: audit: type=1403 audit(1769215292.374:85): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:41:36.054621 systemd[1]: Successfully loaded SELinux policy in 181.067ms. Jan 24 00:41:36.054649 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.968ms. Jan 24 00:41:36.054671 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 24 00:41:36.054698 systemd[1]: Detected virtualization kvm. Jan 24 00:41:36.054719 systemd[1]: Detected architecture x86-64. Jan 24 00:41:36.054739 systemd[1]: Detected first boot. Jan 24 00:41:36.054759 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 24 00:41:36.054781 kernel: audit: type=1334 audit(1769215292.586:86): prog-id=10 op=LOAD Jan 24 00:41:36.054802 kernel: audit: type=1334 audit(1769215292.588:87): prog-id=10 op=UNLOAD Jan 24 00:41:36.054822 kernel: audit: type=1334 audit(1769215292.588:88): prog-id=11 op=LOAD Jan 24 00:41:36.054841 kernel: audit: type=1334 audit(1769215292.588:89): prog-id=11 op=UNLOAD Jan 24 00:41:36.054860 zram_generator::config[1143]: No configuration found. Jan 24 00:41:36.054881 kernel: Guest personality initialized and is inactive Jan 24 00:41:36.054900 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 24 00:41:36.054923 kernel: Initialized host personality Jan 24 00:41:36.054942 kernel: NET: Registered PF_VSOCK protocol family Jan 24 00:41:36.054963 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:41:36.054984 kernel: audit: type=1334 audit(1769215294.254:90): prog-id=12 op=LOAD Jan 24 00:41:36.055003 kernel: audit: type=1334 audit(1769215294.258:91): prog-id=3 op=UNLOAD Jan 24 00:41:36.055134 kernel: audit: type=1334 audit(1769215294.258:92): prog-id=13 op=LOAD Jan 24 00:41:36.055152 kernel: audit: type=1334 audit(1769215294.258:93): prog-id=14 op=LOAD Jan 24 00:41:36.055176 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:41:36.055196 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:41:36.055215 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jan 24 00:41:36.055244 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:41:36.055264 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:41:36.055284 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:41:36.055301 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:41:36.055328 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:41:36.055347 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:41:36.055374 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:41:36.055394 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:41:36.055413 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:41:36.055439 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:41:36.055457 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:41:36.055477 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:41:36.055495 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:41:36.055517 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:41:36.055539 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:41:36.055557 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:41:36.055575 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:41:36.055596 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:41:36.055613 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:41:36.055630 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:41:36.055647 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:41:36.055666 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:41:36.055683 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:41:36.055700 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 24 00:41:36.055721 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:41:36.055738 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:41:36.055754 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:41:36.055773 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:41:36.055790 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 24 00:41:36.055807 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:41:36.055827 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 24 00:41:36.055848 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:41:36.055866 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 24 00:41:36.055882 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. 
Jan 24 00:41:36.055899 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:41:36.055916 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:41:36.055932 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:41:36.055950 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:41:36.055972 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:41:36.055992 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:41:36.056744 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:41:36.056784 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:41:36.056803 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:41:36.056822 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:41:36.056840 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:41:36.056865 systemd[1]: Reached target machines.target - Containers. Jan 24 00:41:36.056884 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:41:36.056903 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:41:36.056927 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:41:36.056944 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:41:36.056966 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:41:36.056982 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:41:36.057159 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:41:36.057185 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:41:36.057206 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:41:36.057225 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:41:36.057366 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:41:36.057389 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:41:36.057414 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:41:36.057433 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:41:36.057454 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:41:36.057473 kernel: fuse: init (API version 7.41) Jan 24 00:41:36.057490 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:41:36.057513 kernel: ACPI: bus type drm_connector registered Jan 24 00:41:36.057533 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:41:36.057551 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 24 00:41:36.057610 systemd-journald[1222]: Collecting audit messages is enabled. Jan 24 00:41:36.057655 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:41:36.057682 systemd-journald[1222]: Journal started Jan 24 00:41:36.057717 systemd-journald[1222]: Runtime Journal (/run/log/journal/bf62c33b161c43959045d6b3d80b9b4b) is 6M, max 48.2M, 42.1M free. Jan 24 00:41:34.939000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 24 00:41:35.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:35.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:35.886000 audit: BPF prog-id=14 op=UNLOAD Jan 24 00:41:35.886000 audit: BPF prog-id=13 op=UNLOAD Jan 24 00:41:35.891000 audit: BPF prog-id=15 op=LOAD Jan 24 00:41:35.896000 audit: BPF prog-id=16 op=LOAD Jan 24 00:41:35.896000 audit: BPF prog-id=17 op=LOAD Jan 24 00:41:36.047000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 24 00:41:36.047000 audit[1222]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffddf861210 a2=4000 a3=0 items=0 ppid=1 pid=1222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:41:36.047000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 24 00:41:34.212471 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:41:34.260422 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 24 00:41:34.262217 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:41:34.264268 systemd[1]: systemd-journald.service: Consumed 1.952s CPU time. Jan 24 00:41:36.093310 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 24 00:41:36.116152 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:41:36.158796 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:41:36.180953 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:41:36.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.181423 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:41:36.191409 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:41:36.202829 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:41:36.213574 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:41:36.237810 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 24 00:41:36.253935 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:41:36.264832 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:41:36.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.278730 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:41:36.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.289148 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:41:36.289597 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:41:36.299208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:41:36.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.300859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:41:36.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.314507 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:41:36.314907 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:41:36.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.339583 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:41:36.339994 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:41:36.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.359790 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 24 00:41:36.360586 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:41:36.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.378779 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:41:36.383905 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:41:36.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.390643 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:41:36.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.405820 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:41:36.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.417773 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:41:36.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.441895 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 24 00:41:36.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.483848 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:41:36.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.503865 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:41:36.512461 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 24 00:41:36.528669 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:41:36.564346 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
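The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop instances above each load one kernel module and then deactivate. A small sketch, separate from the boot flow itself, that checks whether those modules ended up available by reading /proc/modules and /sys/module:

```python
#!/usr/bin/env python3
"""Sketch: verify that the modules loaded by the modprobe@*.service
instances above (configfs, dm_mod, drm, efi_pstore, fuse, loop) are present."""
import os

WANTED = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

def loaded_modules():
    # /proc/modules lists loadable modules; the first field is the name.
    with open("/proc/modules") as f:
        return {line.split()[0] for line in f}

def present(name, loaded):
    # Loaded modules appear in /proc/modules; many built-ins still
    # show up under /sys/module, so check both.
    return name in loaded or os.path.isdir(f"/sys/module/{name}")

if __name__ == "__main__":
    loaded = loaded_modules()
    for mod in WANTED:
        print(f"{mod}: {'ok' if present(mod, loaded) else 'missing'}")
```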
Jan 24 00:41:36.574724 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:41:36.574837 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:41:36.585395 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 24 00:41:36.600418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:41:36.600677 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:41:36.607211 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:41:36.633391 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:41:36.646376 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:41:36.653957 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:41:36.666467 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:41:36.671360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:41:36.681696 systemd-journald[1222]: Time spent on flushing to /var/log/journal/bf62c33b161c43959045d6b3d80b9b4b is 202.800ms for 1122 entries. Jan 24 00:41:36.681696 systemd-journald[1222]: System Journal (/var/log/journal/bf62c33b161c43959045d6b3d80b9b4b) is 8M, max 163.5M, 155.5M free. Jan 24 00:41:37.305837 systemd-journald[1222]: Received client request to flush runtime journal. Jan 24 00:41:37.306248 kernel: loop1: detected capacity change from 0 to 50784 Jan 24 00:41:36.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:36.695329 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:41:36.716681 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:41:36.737737 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:41:36.757301 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:41:36.778996 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:41:36.806749 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:41:36.821951 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 24 00:41:37.517399 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:41:37.551150 kernel: kauditd_printk_skb: 38 callbacks suppressed Jan 24 00:41:37.551400 kernel: audit: type=1130 audit(1769215297.529:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:41:37.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:37.566751 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:41:37.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:37.610175 kernel: audit: type=1130 audit(1769215297.581:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:37.613906 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jan 24 00:41:37.613977 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jan 24 00:41:37.647449 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:41:37.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:37.681430 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:41:37.682304 kernel: audit: type=1130 audit(1769215297.665:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:37.690301 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:41:37.695800 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 24 00:41:37.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:37.731801 kernel: audit: type=1130 audit(1769215297.705:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:37.758366 kernel: loop2: detected capacity change from 0 to 111560 Jan 24 00:41:37.960909 kernel: loop3: detected capacity change from 0 to 219144 Jan 24 00:41:38.023404 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:41:38.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:38.049884 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 24 00:41:38.047000 audit: BPF prog-id=18 op=LOAD Jan 24 00:41:38.075496 kernel: audit: type=1130 audit(1769215298.044:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:41:38.075546 kernel: audit: type=1334 audit(1769215298.047:135): prog-id=18 op=LOAD Jan 24 00:41:38.075906 kernel: audit: type=1334 audit(1769215298.047:136): prog-id=19 op=LOAD Jan 24 00:41:38.075945 kernel: audit: type=1334 audit(1769215298.047:137): prog-id=20 op=LOAD Jan 24 00:41:38.047000 audit: BPF prog-id=19 op=LOAD Jan 24 00:41:38.047000 audit: BPF prog-id=20 op=LOAD Jan 24 00:41:38.106000 audit: BPF prog-id=21 op=LOAD Jan 24 00:41:38.113134 kernel: audit: type=1334 audit(1769215298.106:138): prog-id=21 op=LOAD Jan 24 00:41:38.113162 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:41:38.133643 kernel: loop4: detected capacity change from 0 to 50784 Jan 24 00:41:38.138468 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:41:38.155000 audit: BPF prog-id=22 op=LOAD Jan 24 00:41:38.164380 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 24 00:41:38.171401 kernel: audit: type=1334 audit(1769215298.155:139): prog-id=22 op=LOAD Jan 24 00:41:38.156000 audit: BPF prog-id=23 op=LOAD Jan 24 00:41:38.156000 audit: BPF prog-id=24 op=LOAD Jan 24 00:41:38.186000 audit: BPF prog-id=25 op=LOAD Jan 24 00:41:38.204000 audit: BPF prog-id=26 op=LOAD Jan 24 00:41:38.204000 audit: BPF prog-id=27 op=LOAD Jan 24 00:41:38.207193 kernel: loop5: detected capacity change from 0 to 111560 Jan 24 00:41:38.208676 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:41:38.270438 kernel: loop6: detected capacity change from 0 to 219144 Jan 24 00:41:38.925973 (sd-merge)[1288]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 24 00:41:38.988860 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Jan 24 00:41:38.988897 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Jan 24 00:41:39.017315 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:41:39.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:39.026002 (sd-merge)[1288]: Merged extensions into '/usr'. Jan 24 00:41:39.059264 systemd[1]: Reload requested from client PID 1264 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:41:39.059497 systemd[1]: Reloading... Jan 24 00:41:39.100935 systemd-nsresourced[1290]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 24 00:41:39.372259 zram_generator::config[1336]: No configuration found. Jan 24 00:41:39.989383 kernel: hrtimer: interrupt took 5021736 ns Jan 24 00:41:40.096857 systemd-oomd[1286]: No swap; memory pressure usage will be degraded Jan 24 00:41:40.355535 systemd-resolved[1287]: Positive Trust Anchors: Jan 24 00:41:40.359264 systemd-resolved[1287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:41:40.359277 systemd-resolved[1287]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 00:41:40.359332 systemd-resolved[1287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:41:40.369016 systemd-resolved[1287]: Defaulting to hostname 'linux'. Jan 24 00:41:40.534399 systemd[1]: Reloading finished in 1473 ms. Jan 24 00:41:40.650987 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:41:40.739563 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 24 00:41:40.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:40.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:40.752267 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 24 00:41:40.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:40.772210 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:41:40.789506 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:41:40.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:40.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:40.805389 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:41:40.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:40.844461 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:41:40.870148 systemd[1]: Starting ensure-sysext.service... Jan 24 00:41:40.898000 audit: BPF prog-id=8 op=UNLOAD Jan 24 00:41:40.898000 audit: BPF prog-id=7 op=UNLOAD Jan 24 00:41:40.899000 audit: BPF prog-id=28 op=LOAD Jan 24 00:41:40.899000 audit: BPF prog-id=29 op=LOAD Jan 24 00:41:40.889429 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:41:40.901357 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
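The systemd-resolved lines above list the positive trust anchors as root-zone DS records of the form ". IN DS <key tag> <algorithm> <digest type> <digest>". A sketch that splits the two records copied from the log into named fields; the comments on algorithm 8 (RSA/SHA-256) and digest type 2 (SHA-256) follow the standard DNSSEC registries:

```python
#!/usr/bin/env python3
"""Sketch: parse the DNSSEC trust-anchor lines logged by systemd-resolved above."""

ANCHORS = [
    ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d",
    ". IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16",
]

def parse_ds(record):
    owner, _in, _ds, key_tag, algorithm, digest_type, digest = record.split()
    return {
        "owner": owner,                   # "." is the DNS root zone
        "key_tag": int(key_tag),
        "algorithm": int(algorithm),      # 8 = RSA/SHA-256
        "digest_type": int(digest_type),  # 2 = SHA-256
        "digest": digest,
    }

if __name__ == "__main__":
    for rec in ANCHORS:
        print(parse_ds(rec))
```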
Jan 24 00:41:40.915000 audit: BPF prog-id=30 op=LOAD Jan 24 00:41:40.915000 audit: BPF prog-id=15 op=UNLOAD Jan 24 00:41:40.916000 audit: BPF prog-id=31 op=LOAD Jan 24 00:41:40.916000 audit: BPF prog-id=32 op=LOAD Jan 24 00:41:40.916000 audit: BPF prog-id=16 op=UNLOAD Jan 24 00:41:40.916000 audit: BPF prog-id=17 op=UNLOAD Jan 24 00:41:40.917000 audit: BPF prog-id=33 op=LOAD Jan 24 00:41:40.917000 audit: BPF prog-id=22 op=UNLOAD Jan 24 00:41:40.921000 audit: BPF prog-id=34 op=LOAD Jan 24 00:41:40.921000 audit: BPF prog-id=35 op=LOAD Jan 24 00:41:40.921000 audit: BPF prog-id=23 op=UNLOAD Jan 24 00:41:40.921000 audit: BPF prog-id=24 op=UNLOAD Jan 24 00:41:40.923000 audit: BPF prog-id=36 op=LOAD Jan 24 00:41:40.923000 audit: BPF prog-id=25 op=UNLOAD Jan 24 00:41:40.923000 audit: BPF prog-id=37 op=LOAD Jan 24 00:41:40.923000 audit: BPF prog-id=38 op=LOAD Jan 24 00:41:40.923000 audit: BPF prog-id=26 op=UNLOAD Jan 24 00:41:40.923000 audit: BPF prog-id=27 op=UNLOAD Jan 24 00:41:40.925000 audit: BPF prog-id=39 op=LOAD Jan 24 00:41:40.925000 audit: BPF prog-id=21 op=UNLOAD Jan 24 00:41:40.926000 audit: BPF prog-id=40 op=LOAD Jan 24 00:41:40.926000 audit: BPF prog-id=18 op=UNLOAD Jan 24 00:41:40.926000 audit: BPF prog-id=41 op=LOAD Jan 24 00:41:40.927000 audit: BPF prog-id=42 op=LOAD Jan 24 00:41:40.927000 audit: BPF prog-id=19 op=UNLOAD Jan 24 00:41:40.927000 audit: BPF prog-id=20 op=UNLOAD Jan 24 00:41:40.937728 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 24 00:41:40.937842 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 24 00:41:40.938412 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:41:40.939969 systemd[1]: Reload requested from client PID 1372 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:41:40.940132 systemd[1]: Reloading... Jan 24 00:41:40.940726 systemd-tmpfiles[1373]: ACLs are not supported, ignoring. Jan 24 00:41:40.940888 systemd-tmpfiles[1373]: ACLs are not supported, ignoring. Jan 24 00:41:40.972825 systemd-tmpfiles[1373]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:41:40.972842 systemd-tmpfiles[1373]: Skipping /boot Jan 24 00:41:41.016494 systemd-tmpfiles[1373]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:41:41.016672 systemd-tmpfiles[1373]: Skipping /boot Jan 24 00:41:41.096453 systemd-udevd[1374]: Using default interface naming scheme 'v257'. Jan 24 00:41:41.108445 zram_generator::config[1401]: No configuration found. Jan 24 00:41:41.523230 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:41:41.617145 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 00:41:41.657147 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:41:41.714199 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:41:41.714717 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:41:41.751609 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:41:41.751798 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:41:41.763737 systemd[1]: Reloading finished in 822 ms. Jan 24 00:41:41.793664 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
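systemd-tmpfiles above ignores duplicate lines for /var/lib/nfs/sm, /var/lib/nfs/sm.bak and /root. A sketch of that duplicate check over tmpfiles.d-style lines; the sample entries are hypothetical stand-ins, real input would come from /usr/lib/tmpfiles.d/*.conf:

```python
#!/usr/bin/env python3
"""Sketch: flag duplicate path entries across tmpfiles.d-style lines,
the situation systemd-tmpfiles warns about above."""
from collections import Counter

SAMPLE = """\
d /var/lib/nfs/sm 0700 statd statd -
d /var/lib/nfs/sm 0755 root root -
d /root 0700 root root -
"""

def duplicate_paths(text):
    paths = []
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith("#"):
            continue  # skip blanks and comments
        fields = line.split()
        if len(fields) >= 2:
            paths.append(fields[1])  # second field of a tmpfiles.d line is the path
    return [path for path, count in Counter(paths).items() if count > 1]

if __name__ == "__main__":
    print(duplicate_paths(SAMPLE))  # ['/var/lib/nfs/sm']
```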
Jan 24 00:41:41.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:41.809670 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:41:41.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:41.827000 audit: BPF prog-id=43 op=LOAD Jan 24 00:41:41.829000 audit: BPF prog-id=44 op=LOAD Jan 24 00:41:41.829000 audit: BPF prog-id=28 op=UNLOAD Jan 24 00:41:41.829000 audit: BPF prog-id=29 op=UNLOAD Jan 24 00:41:41.832000 audit: BPF prog-id=45 op=LOAD Jan 24 00:41:41.832000 audit: BPF prog-id=36 op=UNLOAD Jan 24 00:41:41.832000 audit: BPF prog-id=46 op=LOAD Jan 24 00:41:41.832000 audit: BPF prog-id=47 op=LOAD Jan 24 00:41:41.832000 audit: BPF prog-id=37 op=UNLOAD Jan 24 00:41:41.832000 audit: BPF prog-id=38 op=UNLOAD Jan 24 00:41:41.833000 audit: BPF prog-id=48 op=LOAD Jan 24 00:41:41.834000 audit: BPF prog-id=39 op=UNLOAD Jan 24 00:41:41.837000 audit: BPF prog-id=49 op=LOAD Jan 24 00:41:41.843000 audit: BPF prog-id=40 op=UNLOAD Jan 24 00:41:41.843000 audit: BPF prog-id=50 op=LOAD Jan 24 00:41:41.843000 audit: BPF prog-id=51 op=LOAD Jan 24 00:41:41.843000 audit: BPF prog-id=41 op=UNLOAD Jan 24 00:41:41.843000 audit: BPF prog-id=42 op=UNLOAD Jan 24 00:41:41.846000 audit: BPF prog-id=52 op=LOAD Jan 24 00:41:41.846000 audit: BPF prog-id=30 op=UNLOAD Jan 24 00:41:41.846000 audit: BPF prog-id=53 op=LOAD Jan 24 00:41:41.846000 audit: BPF prog-id=54 op=LOAD Jan 24 00:41:41.848000 audit: BPF prog-id=31 op=UNLOAD Jan 24 00:41:41.848000 audit: BPF prog-id=32 op=UNLOAD Jan 24 00:41:41.848000 audit: BPF prog-id=55 op=LOAD Jan 24 00:41:41.848000 audit: BPF prog-id=33 op=UNLOAD Jan 24 00:41:41.849000 audit: BPF prog-id=56 op=LOAD Jan 24 00:41:41.849000 audit: BPF prog-id=57 op=LOAD Jan 24 00:41:41.849000 audit: BPF prog-id=34 op=UNLOAD Jan 24 00:41:41.849000 audit: BPF prog-id=35 op=UNLOAD Jan 24 00:41:41.905356 systemd[1]: Finished ensure-sysext.service. Jan 24 00:41:41.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:41.979503 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:41:41.986453 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 24 00:41:41.997250 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:41:42.015793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:41:42.021397 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:41:42.169529 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:41:42.194956 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:41:42.210443 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 24 00:41:42.224760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:41:42.224964 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:41:42.230514 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:41:42.245573 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:41:42.256192 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:41:42.278634 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:41:42.312000 audit: BPF prog-id=58 op=LOAD Jan 24 00:41:42.331000 audit: BPF prog-id=59 op=LOAD Jan 24 00:41:42.317657 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:41:42.341991 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 00:41:42.362495 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:41:42.427449 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:41:42.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:42.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:42.458000 audit[1510]: SYSTEM_BOOT pid=1510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 24 00:41:42.443607 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:41:42.447556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:41:42.448246 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:41:42.457803 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:41:42.458390 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:41:42.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:42.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:42.469366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:41:42.470258 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 24 00:41:42.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:42.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:42.480664 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:41:42.481162 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:41:42.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:42.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:42.488909 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:41:42.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:42.517991 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:41:42.518415 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:41:42.527953 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:41:42.557173 kernel: kauditd_printk_skb: 87 callbacks suppressed Jan 24 00:41:42.557256 kernel: audit: type=1130 audit(1769215302.544:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:42.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:41:42.585195 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 24 00:41:42.588000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 24 00:41:42.599330 augenrules[1529]: No rules Jan 24 00:41:42.638803 kernel: audit: type=1305 audit(1769215302.588:228): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 24 00:41:42.638897 kernel: audit: type=1300 audit(1769215302.588:228): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcfe73d4b0 a2=420 a3=0 items=0 ppid=1486 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:41:42.638951 kernel: audit: type=1327 audit(1769215302.588:228): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:41:42.588000 audit[1529]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcfe73d4b0 a2=420 a3=0 items=0 ppid=1486 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:41:42.588000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:41:42.648574 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:41:42.649375 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 00:41:45.469736 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:41:45.485259 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:41:47.122002 systemd-networkd[1505]: lo: Link UP Jan 24 00:41:47.122319 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:41:47.122875 systemd-networkd[1505]: lo: Gained carrier Jan 24 00:41:47.148821 systemd-networkd[1505]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:41:47.148830 systemd-networkd[1505]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:41:47.152331 systemd-networkd[1505]: eth0: Link UP Jan 24 00:41:47.157692 systemd-networkd[1505]: eth0: Gained carrier Jan 24 00:41:47.157854 systemd-networkd[1505]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:41:47.249625 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:41:47.250789 systemd[1]: Reached target network.target - Network. Jan 24 00:41:47.302290 systemd-networkd[1505]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:41:47.311668 systemd-timesyncd[1507]: Network configuration changed, trying to establish connection. Jan 24 00:41:47.325523 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:41:47.939915 systemd-timesyncd[1507]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 24 00:41:47.940059 systemd-timesyncd[1507]: Initial clock synchronization to Sat 2026-01-24 00:41:47.939741 UTC. Jan 24 00:41:47.944651 systemd-resolved[1287]: Clock change detected. Flushing caches. 
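The PROCTITLE audit record above carries the auditctl command line hex-encoded, with NUL separators between argv elements. A sketch that decodes the exact value from the log:

```python
#!/usr/bin/env python3
"""Sketch: decode the PROCTITLE value from the audit record above."""
import binascii

PROCTITLE = (
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
)

def decode_proctitle(hex_value):
    # argv elements are joined with NUL bytes before hex-encoding
    raw = binascii.unhexlify(hex_value)
    return [part.decode() for part in raw.split(b"\x00")]

if __name__ == "__main__":
    print(decode_proctitle(PROCTITLE))
    # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```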
Jan 24 00:41:47.969939 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 24 00:41:47.999352 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:41:48.032380 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:41:48.157790 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 24 00:41:49.147594 systemd-networkd[1505]: eth0: Gained IPv6LL Jan 24 00:41:49.158755 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:41:49.170808 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:41:49.212371 kernel: kvm_amd: TSC scaling supported Jan 24 00:41:49.212619 kernel: kvm_amd: Nested Virtualization enabled Jan 24 00:41:49.212650 kernel: kvm_amd: Nested Paging enabled Jan 24 00:41:49.225113 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 24 00:41:49.225362 kernel: kvm_amd: PMU virtualization is disabled Jan 24 00:41:49.552093 ldconfig[1498]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:41:49.580934 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:41:49.607396 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:41:49.816123 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:41:49.837076 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:41:49.846493 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:41:49.862043 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:41:49.886545 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 24 00:41:49.901833 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:41:49.915498 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:41:49.925902 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 24 00:41:49.949625 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 24 00:41:49.972564 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:41:50.001018 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:41:50.001615 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:41:50.029913 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:41:50.031572 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:41:50.062575 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:41:50.075855 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:41:50.090471 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 24 00:41:50.105965 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 24 00:41:50.128634 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 24 00:41:50.163672 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
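The ldconfig message above ("/usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start") comes from an ELF signature check applied to a plain text file. A sketch of that check; the second path is only an illustrative example:

```python
#!/usr/bin/env python3
"""Sketch: reproduce the "wrong magic bytes" test by comparing a file's
first four bytes against the ELF signature."""

ELF_MAGIC = b"\x7fELF"

def is_elf(path):
    with open(path, "rb") as f:
        return f.read(4) == ELF_MAGIC

if __name__ == "__main__":
    for path in ("/usr/lib/ld.so.conf", "/usr/lib64/libc.so.6"):
        try:
            print(path, "ELF" if is_elf(path) else "not an ELF file")
        except OSError as err:
            print(path, err)
```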
Jan 24 00:41:50.179689 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 24 00:41:50.192765 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:41:50.205540 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:41:50.224486 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:41:50.238625 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:41:50.238737 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:41:50.242690 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:41:50.261593 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:41:50.299122 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:41:50.325597 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:41:50.352616 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:41:50.369576 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:41:50.381412 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:41:50.401581 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 24 00:41:50.429729 jq[1558]: false Jan 24 00:41:50.439740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:41:50.451516 extend-filesystems[1559]: Found /dev/vda6 Jan 24 00:41:50.472742 extend-filesystems[1559]: Found /dev/vda9 Jan 24 00:41:50.491659 extend-filesystems[1559]: Checking size of /dev/vda9 Jan 24 00:41:50.491524 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:41:50.499351 oslogin_cache_refresh[1560]: Refreshing passwd entry cache Jan 24 00:41:50.524300 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing passwd entry cache Jan 24 00:41:50.522516 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:41:50.538397 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:41:50.542398 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting users, quitting Jan 24 00:41:50.542398 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 00:41:50.542398 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing group entry cache Jan 24 00:41:50.541464 oslogin_cache_refresh[1560]: Failure getting users, quitting Jan 24 00:41:50.541487 oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 00:41:50.541539 oslogin_cache_refresh[1560]: Refreshing group entry cache Jan 24 00:41:50.543556 extend-filesystems[1559]: Resized partition /dev/vda9 Jan 24 00:41:50.605696 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 24 00:41:50.558763 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 24 00:41:50.589713 oslogin_cache_refresh[1560]: Failure getting groups, quitting Jan 24 00:41:50.605920 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting groups, quitting Jan 24 00:41:50.605920 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 00:41:50.605981 extend-filesystems[1576]: resize2fs 1.47.3 (8-Jul-2025) Jan 24 00:41:50.589734 oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 00:41:50.621063 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:41:50.652621 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:41:50.673025 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:41:50.674036 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:41:50.680667 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:41:50.711739 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:41:50.728602 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 24 00:41:50.738382 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:41:50.771598 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:41:50.797793 jq[1590]: true Jan 24 00:41:50.798480 update_engine[1586]: I20260124 00:41:50.784102 1586 main.cc:92] Flatcar Update Engine starting Jan 24 00:41:50.778693 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:41:50.779503 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 24 00:41:50.780008 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 24 00:41:50.803050 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:41:50.805618 extend-filesystems[1576]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 24 00:41:50.805618 extend-filesystems[1576]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 24 00:41:50.805618 extend-filesystems[1576]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 24 00:41:50.806005 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:41:50.870482 extend-filesystems[1559]: Resized filesystem in /dev/vda9 Jan 24 00:41:50.828520 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:41:50.830332 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:41:50.885018 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:41:50.896485 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:41:50.896962 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:41:50.960966 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:41:50.963490 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 24 00:41:50.980251 jq[1606]: true Jan 24 00:41:51.032977 tar[1604]: linux-amd64/LICENSE Jan 24 00:41:51.032977 tar[1604]: linux-amd64/helm Jan 24 00:41:51.065338 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
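The EXT4-fs and resize2fs lines above grow /dev/vda9 from 456704 to 1784827 blocks of 4 KiB. A short sketch converting those block counts into sizes:

```python
#!/usr/bin/env python3
"""Sketch: turn the block counts from the EXT4-fs resize messages above
into human-readable sizes."""

BLOCK_SIZE = 4096          # "(4k) blocks" per the resize2fs output
OLD_BLOCKS = 456_704
NEW_BLOCKS = 1_784_827

def gib(blocks):
    return blocks * BLOCK_SIZE / 2**30

if __name__ == "__main__":
    print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # ~1.74 GiB
    print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # ~6.81 GiB
    print(f"growth: {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")
```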
Jan 24 00:41:51.173027 dbus-daemon[1556]: [system] SELinux support is enabled Jan 24 00:41:51.173589 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:41:51.191813 update_engine[1586]: I20260124 00:41:51.191526 1586 update_check_scheduler.cc:74] Next update check in 3m29s Jan 24 00:41:51.196526 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:41:51.196566 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:41:51.208051 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:41:51.208320 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:41:51.221354 systemd-logind[1585]: Watching system buttons on /dev/input/event2 (Power Button) Jan 24 00:41:51.221398 systemd-logind[1585]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:41:51.221920 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:41:51.223461 systemd-logind[1585]: New seat seat0. Jan 24 00:41:51.232590 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:41:51.239821 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:41:51.259670 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:41:51.279015 bash[1641]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:41:51.284539 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:41:51.298054 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 24 00:41:51.332364 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:41:51.350529 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:41:51.415912 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:41:51.417639 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:41:51.442900 locksmithd[1650]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:41:51.442915 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:41:51.507687 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:41:51.535441 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:41:51.552994 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:41:51.568635 systemd[1]: Reached target getty.target - Login Prompts. 
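update_engine above schedules its next check in "3m29s". A sketch, purely illustrative, that converts that interval string into seconds:

```python
#!/usr/bin/env python3
"""Sketch: convert the "Next update check in 3m29s" interval reported by
update_engine above into seconds."""
import re

def parse_interval(text):
    # Accepts forms like "3m29s", "45s", "1h2m3s"
    match = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?", text)
    if not match:
        raise ValueError(f"unrecognized interval: {text!r}")
    hours, minutes, seconds = (int(g) if g else 0 for g in match.groups())
    return hours * 3600 + minutes * 60 + seconds

if __name__ == "__main__":
    print(parse_interval("3m29s"))  # 209
```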
Jan 24 00:41:51.682829 containerd[1607]: time="2026-01-24T00:41:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 24 00:41:51.685360 containerd[1607]: time="2026-01-24T00:41:51.684427543Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 24 00:41:51.727474 containerd[1607]: time="2026-01-24T00:41:51.726332467Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.82µs" Jan 24 00:41:51.727474 containerd[1607]: time="2026-01-24T00:41:51.726434548Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 24 00:41:51.727474 containerd[1607]: time="2026-01-24T00:41:51.726491293Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 24 00:41:51.727474 containerd[1607]: time="2026-01-24T00:41:51.726507945Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 24 00:41:51.728633 containerd[1607]: time="2026-01-24T00:41:51.728548093Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 24 00:41:51.728633 containerd[1607]: time="2026-01-24T00:41:51.728628714Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 24 00:41:51.728846 containerd[1607]: time="2026-01-24T00:41:51.728723260Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 24 00:41:51.728846 containerd[1607]: time="2026-01-24T00:41:51.728804532Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 24 00:41:51.734495 containerd[1607]: time="2026-01-24T00:41:51.734436076Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 24 00:41:51.734495 containerd[1607]: time="2026-01-24T00:41:51.734462796Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 24 00:41:51.734495 containerd[1607]: time="2026-01-24T00:41:51.734480620Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 24 00:41:51.734495 containerd[1607]: time="2026-01-24T00:41:51.734492722Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 24 00:41:51.734889 containerd[1607]: time="2026-01-24T00:41:51.734804975Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 24 00:41:51.734889 containerd[1607]: time="2026-01-24T00:41:51.734884213Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 24 00:41:51.735098 containerd[1607]: time="2026-01-24T00:41:51.735021078Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 
24 00:41:51.735899 containerd[1607]: time="2026-01-24T00:41:51.735567569Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 24 00:41:51.735899 containerd[1607]: time="2026-01-24T00:41:51.735613635Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 24 00:41:51.735899 containerd[1607]: time="2026-01-24T00:41:51.735628793Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 24 00:41:51.735899 containerd[1607]: time="2026-01-24T00:41:51.735859774Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 24 00:41:51.737509 containerd[1607]: time="2026-01-24T00:41:51.737283362Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 24 00:41:51.737509 containerd[1607]: time="2026-01-24T00:41:51.737417853Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:41:51.762458 containerd[1607]: time="2026-01-24T00:41:51.761439858Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 24 00:41:51.762458 containerd[1607]: time="2026-01-24T00:41:51.761720261Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 24 00:41:51.762458 containerd[1607]: time="2026-01-24T00:41:51.761837470Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 24 00:41:51.762458 containerd[1607]: time="2026-01-24T00:41:51.761864861Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 24 00:41:51.762458 containerd[1607]: time="2026-01-24T00:41:51.761882614Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 24 00:41:51.762458 containerd[1607]: time="2026-01-24T00:41:51.761909474Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 24 00:41:51.762458 containerd[1607]: time="2026-01-24T00:41:51.761924573Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 24 00:41:51.762458 containerd[1607]: time="2026-01-24T00:41:51.761942376Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 24 00:41:51.762458 containerd[1607]: time="2026-01-24T00:41:51.761958365Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 24 00:41:51.762458 containerd[1607]: time="2026-01-24T00:41:51.761973133Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 24 00:41:51.762458 containerd[1607]: time="2026-01-24T00:41:51.761986217Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 24 00:41:51.762458 containerd[1607]: time="2026-01-24T00:41:51.762004041Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 24 00:41:51.762458 containerd[1607]: time="2026-01-24T00:41:51.762020762Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 24 00:41:51.762458 containerd[1607]: time="2026-01-24T00:41:51.762037303Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 24 00:41:51.762973 containerd[1607]: time="2026-01-24T00:41:51.762528630Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 24 00:41:51.762973 containerd[1607]: time="2026-01-24T00:41:51.762557625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 24 00:41:51.762973 containerd[1607]: time="2026-01-24T00:41:51.762575788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 24 00:41:51.762973 containerd[1607]: time="2026-01-24T00:41:51.762590847Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 24 00:41:51.762973 containerd[1607]: time="2026-01-24T00:41:51.762604753Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 24 00:41:51.762973 containerd[1607]: time="2026-01-24T00:41:51.762617206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 24 00:41:51.762973 containerd[1607]: time="2026-01-24T00:41:51.762632063Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 24 00:41:51.762973 containerd[1607]: time="2026-01-24T00:41:51.762650398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 24 00:41:51.762973 containerd[1607]: time="2026-01-24T00:41:51.762691104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 24 00:41:51.762973 containerd[1607]: time="2026-01-24T00:41:51.762708937Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 24 00:41:51.762973 containerd[1607]: time="2026-01-24T00:41:51.762722302Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 24 00:41:51.762973 containerd[1607]: time="2026-01-24T00:41:51.762750344Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 24 00:41:51.762973 containerd[1607]: time="2026-01-24T00:41:51.762809074Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 24 00:41:51.762973 containerd[1607]: time="2026-01-24T00:41:51.762825575Z" level=info msg="Start snapshots syncer" Jan 24 00:41:51.767823 containerd[1607]: time="2026-01-24T00:41:51.763521875Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 24 00:41:51.767823 containerd[1607]: time="2026-01-24T00:41:51.763853554Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 24 00:41:51.768344 containerd[1607]: time="2026-01-24T00:41:51.764082251Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 24 00:41:51.768344 containerd[1607]: time="2026-01-24T00:41:51.764666902Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 24 00:41:51.768344 containerd[1607]: time="2026-01-24T00:41:51.766389068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 24 00:41:51.768344 containerd[1607]: time="2026-01-24T00:41:51.766420807Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 24 00:41:51.768344 containerd[1607]: time="2026-01-24T00:41:51.766437067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 24 00:41:51.768344 containerd[1607]: time="2026-01-24T00:41:51.766452145Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 24 00:41:51.768344 containerd[1607]: time="2026-01-24T00:41:51.766468506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 24 00:41:51.768344 containerd[1607]: time="2026-01-24T00:41:51.766482342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 24 00:41:51.768344 containerd[1607]: time="2026-01-24T00:41:51.766496779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 24 00:41:51.768344 containerd[1607]: time="2026-01-24T00:41:51.766512528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 24 
00:41:51.768344 containerd[1607]: time="2026-01-24T00:41:51.766526404Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 24 00:41:51.768344 containerd[1607]: time="2026-01-24T00:41:51.766994618Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 24 00:41:51.768344 containerd[1607]: time="2026-01-24T00:41:51.767023933Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 24 00:41:51.768344 containerd[1607]: time="2026-01-24T00:41:51.767038901Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 24 00:41:51.768724 containerd[1607]: time="2026-01-24T00:41:51.767053147Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 24 00:41:51.768724 containerd[1607]: time="2026-01-24T00:41:51.767066572Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 24 00:41:51.768724 containerd[1607]: time="2026-01-24T00:41:51.767083093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 24 00:41:51.768724 containerd[1607]: time="2026-01-24T00:41:51.767096037Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 24 00:41:51.768724 containerd[1607]: time="2026-01-24T00:41:51.767110975Z" level=info msg="runtime interface created" Jan 24 00:41:51.768724 containerd[1607]: time="2026-01-24T00:41:51.767119952Z" level=info msg="created NRI interface" Jan 24 00:41:51.768724 containerd[1607]: time="2026-01-24T00:41:51.767364469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 24 00:41:51.768724 containerd[1607]: time="2026-01-24T00:41:51.767388223Z" level=info msg="Connect containerd service" Jan 24 00:41:51.768724 containerd[1607]: time="2026-01-24T00:41:51.767415564Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:41:51.768961 containerd[1607]: time="2026-01-24T00:41:51.768918089Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:41:51.879312 tar[1604]: linux-amd64/README.md Jan 24 00:41:51.919581 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:41:51.991434 containerd[1607]: time="2026-01-24T00:41:51.988816930Z" level=info msg="Start subscribing containerd event" Jan 24 00:41:51.991434 containerd[1607]: time="2026-01-24T00:41:51.989779867Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:41:51.991434 containerd[1607]: time="2026-01-24T00:41:51.989852603Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 24 00:41:51.991853 containerd[1607]: time="2026-01-24T00:41:51.991806431Z" level=info msg="Start recovering state" Jan 24 00:41:51.996854 containerd[1607]: time="2026-01-24T00:41:51.996826954Z" level=info msg="Start event monitor" Jan 24 00:41:51.997349 containerd[1607]: time="2026-01-24T00:41:51.996943542Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:41:51.997349 containerd[1607]: time="2026-01-24T00:41:51.996960152Z" level=info msg="Start streaming server" Jan 24 00:41:51.997349 containerd[1607]: time="2026-01-24T00:41:51.996971814Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 24 00:41:51.997349 containerd[1607]: time="2026-01-24T00:41:51.996987373Z" level=info msg="runtime interface starting up..." Jan 24 00:41:51.997349 containerd[1607]: time="2026-01-24T00:41:51.996995599Z" level=info msg="starting plugins..." Jan 24 00:41:51.997349 containerd[1607]: time="2026-01-24T00:41:51.997015636Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 24 00:41:51.998431 containerd[1607]: time="2026-01-24T00:41:51.997700935Z" level=info msg="containerd successfully booted in 0.316127s" Jan 24 00:41:51.997967 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:41:53.326910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:41:53.358641 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:41:53.374965 systemd[1]: Startup finished in 13.134s (kernel) + 17.679s (initrd) + 20.580s (userspace) = 51.393s. Jan 24 00:41:53.379687 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:41:56.284694 kubelet[1694]: E0124 00:41:56.282910 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:41:56.308319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:41:56.308724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:41:56.312046 systemd[1]: kubelet.service: Consumed 2.482s CPU time, 258M memory peak. Jan 24 00:41:58.319908 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:41:58.349889 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:51828.service - OpenSSH per-connection server daemon (10.0.0.1:51828). Jan 24 00:41:58.730347 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 51828 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:41:58.740855 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:58.780524 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:41:58.785537 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:41:58.800314 systemd-logind[1585]: New session 1 of user core. Jan 24 00:41:58.844802 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:41:58.851958 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 24 00:41:58.910731 (systemd)[1714]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:58.922513 systemd-logind[1585]: New session 2 of user core. Jan 24 00:41:59.245081 systemd[1714]: Queued start job for default target default.target. Jan 24 00:41:59.275945 systemd[1714]: Created slice app.slice - User Application Slice. Jan 24 00:41:59.276055 systemd[1714]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 24 00:41:59.276076 systemd[1714]: Reached target paths.target - Paths. Jan 24 00:41:59.276475 systemd[1714]: Reached target timers.target - Timers. Jan 24 00:41:59.283934 systemd[1714]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:41:59.291639 systemd[1714]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 24 00:41:59.354918 systemd[1714]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:41:59.355736 systemd[1714]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 24 00:41:59.356035 systemd[1714]: Reached target sockets.target - Sockets. Jan 24 00:41:59.356115 systemd[1714]: Reached target basic.target - Basic System. Jan 24 00:41:59.356364 systemd[1714]: Reached target default.target - Main User Target. Jan 24 00:41:59.357986 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:41:59.359609 systemd[1714]: Startup finished in 416ms. Jan 24 00:41:59.388370 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:41:59.459882 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:51830.service - OpenSSH per-connection server daemon (10.0.0.1:51830). Jan 24 00:41:59.607327 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 51830 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:41:59.608944 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:59.627053 systemd-logind[1585]: New session 3 of user core. Jan 24 00:41:59.636720 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:41:59.670806 sshd[1732]: Connection closed by 10.0.0.1 port 51830 Jan 24 00:41:59.674682 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:59.691561 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:51830.service: Deactivated successfully. Jan 24 00:41:59.696985 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:41:59.701987 systemd-logind[1585]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:41:59.706489 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:51840.service - OpenSSH per-connection server daemon (10.0.0.1:51840). Jan 24 00:41:59.715447 systemd-logind[1585]: Removed session 3. Jan 24 00:41:59.826500 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 51840 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:41:59.832615 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:59.855850 systemd-logind[1585]: New session 4 of user core. Jan 24 00:41:59.879664 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:41:59.905104 sshd[1743]: Connection closed by 10.0.0.1 port 51840 Jan 24 00:41:59.905576 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:59.922283 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:51840.service: Deactivated successfully. 
Jan 24 00:41:59.926698 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:41:59.932386 systemd-logind[1585]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:41:59.942596 systemd-logind[1585]: Removed session 4. Jan 24 00:41:59.946761 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:51848.service - OpenSSH per-connection server daemon (10.0.0.1:51848). Jan 24 00:42:00.076902 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 51848 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:42:00.080674 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:42:00.093847 systemd-logind[1585]: New session 5 of user core. Jan 24 00:42:00.119579 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:42:00.171063 sshd[1753]: Connection closed by 10.0.0.1 port 51848 Jan 24 00:42:00.175049 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Jan 24 00:42:00.191644 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:51848.service: Deactivated successfully. Jan 24 00:42:00.194759 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:42:00.198631 systemd-logind[1585]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:42:00.202680 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:51860.service - OpenSSH per-connection server daemon (10.0.0.1:51860). Jan 24 00:42:00.204860 systemd-logind[1585]: Removed session 5. Jan 24 00:42:00.342404 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 51860 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:42:00.345833 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:42:00.373839 systemd-logind[1585]: New session 6 of user core. Jan 24 00:42:00.393690 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:42:00.490769 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:42:00.491513 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:42:00.519507 sudo[1764]: pam_unix(sudo:session): session closed for user root Jan 24 00:42:00.523419 sshd[1763]: Connection closed by 10.0.0.1 port 51860 Jan 24 00:42:00.526817 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Jan 24 00:42:00.553779 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:51876.service - OpenSSH per-connection server daemon (10.0.0.1:51876). Jan 24 00:42:00.559718 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:51860.service: Deactivated successfully. Jan 24 00:42:00.559750 systemd-logind[1585]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:42:00.565627 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:42:00.580947 systemd-logind[1585]: Removed session 6. Jan 24 00:42:00.722679 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 51876 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:42:00.728431 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:42:00.748611 systemd-logind[1585]: New session 7 of user core. Jan 24 00:42:00.759711 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 24 00:42:00.810855 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:42:00.811527 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:42:00.838340 sudo[1777]: pam_unix(sudo:session): session closed for user root Jan 24 00:42:00.859017 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 24 00:42:00.860962 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:42:00.916615 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 24 00:42:01.133000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 00:42:01.141921 augenrules[1801]: No rules Jan 24 00:42:01.147975 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:42:01.148647 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 00:42:01.158752 sudo[1776]: pam_unix(sudo:session): session closed for user root Jan 24 00:42:01.165610 sshd[1775]: Connection closed by 10.0.0.1 port 51876 Jan 24 00:42:01.166111 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Jan 24 00:42:01.133000 audit[1801]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc3f143fc0 a2=420 a3=0 items=0 ppid=1782 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:01.233770 kernel: audit: type=1305 audit(1769215321.133:229): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 00:42:01.238077 kernel: audit: type=1300 audit(1769215321.133:229): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc3f143fc0 a2=420 a3=0 items=0 ppid=1782 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:01.238124 kernel: audit: type=1327 audit(1769215321.133:229): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:42:01.133000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:42:01.259607 kernel: audit: type=1130 audit(1769215321.150:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:42:01.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:42:01.283875 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:51876.service: Deactivated successfully. Jan 24 00:42:01.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:42:01.301749 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:42:01.307925 systemd-logind[1585]: Session 7 logged out. Waiting for processes to exit. 
Jan 24 00:42:01.317755 kernel: audit: type=1131 audit(1769215321.150:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:42:01.317818 kernel: audit: type=1106 audit(1769215321.155:232): pid=1776 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:42:01.155000 audit[1776]: USER_END pid=1776 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:42:01.340332 kernel: audit: type=1104 audit(1769215321.155:233): pid=1776 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:42:01.155000 audit[1776]: CRED_DISP pid=1776 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:42:01.332712 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:51888.service - OpenSSH per-connection server daemon (10.0.0.1:51888). Jan 24 00:42:01.345870 systemd-logind[1585]: Removed session 7. Jan 24 00:42:01.174000 audit[1768]: USER_END pid=1768 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:42:01.386888 kernel: audit: type=1106 audit(1769215321.174:234): pid=1768 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:42:01.174000 audit[1768]: CRED_DISP pid=1768 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:42:01.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.71:22-10.0.0.1:51876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:42:01.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.71:22-10.0.0.1:51888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:42:01.436619 kernel: audit: type=1104 audit(1769215321.174:235): pid=1768 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:42:01.436768 kernel: audit: type=1131 audit(1769215321.287:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.71:22-10.0.0.1:51876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:42:01.674405 sshd[1810]: Accepted publickey for core from 10.0.0.1 port 51888 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:42:01.671000 audit[1810]: USER_ACCT pid=1810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:42:01.685000 audit[1810]: CRED_ACQ pid=1810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:42:01.686000 audit[1810]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6d15dca0 a2=3 a3=0 items=0 ppid=1 pid=1810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:01.686000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:42:01.688694 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:42:01.722739 systemd-logind[1585]: New session 8 of user core. Jan 24 00:42:01.739957 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:42:01.750000 audit[1810]: USER_START pid=1810 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:42:01.772000 audit[1814]: CRED_ACQ pid=1814 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:42:01.820773 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:42:01.815000 audit[1815]: USER_ACCT pid=1815 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:42:01.835012 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:42:01.828000 audit[1815]: CRED_REFR pid=1815 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 24 00:42:01.835000 audit[1815]: USER_START pid=1815 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:42:04.100810 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:42:04.157107 (dockerd)[1836]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:42:06.406643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:42:06.463044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:42:07.275925 dockerd[1836]: time="2026-01-24T00:42:07.272408310Z" level=info msg="Starting up" Jan 24 00:42:07.283407 dockerd[1836]: time="2026-01-24T00:42:07.282846157Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 24 00:42:07.588851 dockerd[1836]: time="2026-01-24T00:42:07.583733534Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 24 00:42:07.853873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:42:07.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:42:07.876865 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 24 00:42:07.877683 kernel: audit: type=1130 audit(1769215327.857:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:42:07.918844 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport581160943-merged.mount: Deactivated successfully. Jan 24 00:42:07.921571 (kubelet)[1868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:42:08.242804 dockerd[1836]: time="2026-01-24T00:42:08.220717335Z" level=info msg="Loading containers: start." Jan 24 00:42:08.409454 kernel: Initializing XFRM netlink socket Jan 24 00:42:08.609247 kubelet[1868]: E0124 00:42:08.606051 1868 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:42:08.650095 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:42:08.655069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:42:08.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:42:08.658477 systemd[1]: kubelet.service: Consumed 822ms CPU time, 110.6M memory peak. Jan 24 00:42:08.689572 kernel: audit: type=1131 audit(1769215328.656:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 24 00:42:09.947000 audit[1907]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:09.947000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc9e8c0980 a2=0 a3=0 items=0 ppid=1836 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:10.090677 kernel: audit: type=1325 audit(1769215329.947:248): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:10.091411 kernel: audit: type=1300 audit(1769215329.947:248): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc9e8c0980 a2=0 a3=0 items=0 ppid=1836 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:10.091464 kernel: audit: type=1327 audit(1769215329.947:248): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:42:09.947000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:42:10.129876 kernel: audit: type=1325 audit(1769215330.011:249): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:10.011000 audit[1909]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:10.011000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffeba660890 a2=0 a3=0 items=0 ppid=1836 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:10.195022 kernel: audit: type=1300 audit(1769215330.011:249): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffeba660890 a2=0 a3=0 items=0 ppid=1836 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:10.198968 kernel: audit: type=1327 audit(1769215330.011:249): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:42:10.011000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:42:10.095000 audit[1911]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:10.095000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc6bbc840 a2=0 a3=0 items=0 ppid=1836 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:10.301022 kernel: audit: type=1325 audit(1769215330.095:250): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:10.301974 kernel: audit: type=1300 audit(1769215330.095:250): arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7ffcc6bbc840 a2=0 a3=0 items=0 ppid=1836 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:10.095000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 00:42:10.186000 audit[1913]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:10.186000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8bb558a0 a2=0 a3=0 items=0 ppid=1836 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:10.186000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 00:42:10.274000 audit[1915]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:10.274000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff3a5cf040 a2=0 a3=0 items=0 ppid=1836 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:10.274000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 00:42:10.378000 audit[1917]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:10.378000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffee9e409b0 a2=0 a3=0 items=0 ppid=1836 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:10.378000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:42:10.411000 audit[1919]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:10.411000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffca8bbc0c0 a2=0 a3=0 items=0 ppid=1836 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:10.411000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:42:10.483000 audit[1921]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:10.483000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdbede35b0 a2=0 a3=0 items=0 ppid=1836 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:10.483000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 00:42:11.067000 audit[1924]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:11.067000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff2e7e9ab0 a2=0 a3=0 items=0 ppid=1836 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:11.067000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 24 00:42:11.151000 audit[1926]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:11.151000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcf6de1d90 a2=0 a3=0 items=0 ppid=1836 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:11.151000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 00:42:11.191000 audit[1928]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:11.191000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd9a108eb0 a2=0 a3=0 items=0 ppid=1836 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:11.191000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 00:42:11.304000 audit[1930]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:11.304000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe8639cac0 a2=0 a3=0 items=0 ppid=1836 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:11.304000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:42:11.366000 audit[1932]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:11.366000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffee8e3c950 a2=0 a3=0 items=0 ppid=1836 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:11.366000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 00:42:12.366000 audit[1962]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:12.366000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe0c1415a0 a2=0 a3=0 items=0 ppid=1836 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:12.366000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:42:12.405000 audit[1964]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:12.405000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcbe506880 a2=0 a3=0 items=0 ppid=1836 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:12.405000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:42:12.483000 audit[1966]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:12.483000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfbac70f0 a2=0 a3=0 items=0 ppid=1836 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:12.483000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 00:42:12.543000 audit[1968]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:12.543000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6cb5c900 a2=0 a3=0 items=0 ppid=1836 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:12.543000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 00:42:12.571000 audit[1970]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:12.571000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdbd82b150 a2=0 a3=0 items=0 ppid=1836 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:12.571000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 00:42:12.593000 audit[1972]: NETFILTER_CFG 
table=filter:20 family=10 entries=1 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:12.593000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd7713c470 a2=0 a3=0 items=0 ppid=1836 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:12.593000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:42:12.610000 audit[1974]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:12.610000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd03be3750 a2=0 a3=0 items=0 ppid=1836 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:12.610000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:42:12.664000 audit[1976]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:12.664000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffea86f04a0 a2=0 a3=0 items=0 ppid=1836 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:12.664000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 00:42:12.751000 audit[1978]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:12.751000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffdc4a7c780 a2=0 a3=0 items=0 ppid=1836 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:12.751000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 24 00:42:12.806000 audit[1980]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:12.806000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff2eed9340 a2=0 a3=0 items=0 ppid=1836 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:12.806000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 00:42:12.890000 audit[1982]: NETFILTER_CFG table=filter:25 family=10 
entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:12.917954 kernel: kauditd_printk_skb: 61 callbacks suppressed Jan 24 00:42:12.918068 kernel: audit: type=1325 audit(1769215332.890:271): table=filter:25 family=10 entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:12.890000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffd4941e10 a2=0 a3=0 items=0 ppid=1836 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:12.890000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 00:42:13.071915 kernel: audit: type=1300 audit(1769215332.890:271): arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffd4941e10 a2=0 a3=0 items=0 ppid=1836 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:13.073637 kernel: audit: type=1327 audit(1769215332.890:271): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 00:42:13.073709 kernel: audit: type=1325 audit(1769215332.991:272): table=filter:26 family=10 entries=1 op=nft_register_rule pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:12.991000 audit[1984]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:13.099767 kernel: audit: type=1300 audit(1769215332.991:272): arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc1b3e8ba0 a2=0 a3=0 items=0 ppid=1836 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:12.991000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc1b3e8ba0 a2=0 a3=0 items=0 ppid=1836 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:12.991000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:42:13.210791 kernel: audit: type=1327 audit(1769215332.991:272): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:42:13.214943 kernel: audit: type=1325 audit(1769215333.084:273): table=filter:27 family=10 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:13.084000 audit[1986]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:13.226443 kernel: audit: type=1300 audit(1769215333.084:273): arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffeee172c90 a2=0 a3=0 items=0 ppid=1836 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:13.084000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffeee172c90 a2=0 a3=0 items=0 ppid=1836 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:13.084000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 00:42:13.213000 audit[1991]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:13.311514 kernel: audit: type=1327 audit(1769215333.084:273): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 00:42:13.312877 kernel: audit: type=1325 audit(1769215333.213:274): table=filter:28 family=2 entries=1 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:13.213000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe9570be40 a2=0 a3=0 items=0 ppid=1836 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:13.213000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 00:42:13.271000 audit[1993]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:13.271000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc462d43a0 a2=0 a3=0 items=0 ppid=1836 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:13.271000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 00:42:13.295000 audit[1995]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:13.295000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe577e9fd0 a2=0 a3=0 items=0 ppid=1836 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:13.295000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 00:42:13.312000 audit[1997]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:13.312000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdbae0e0c0 a2=0 a3=0 items=0 ppid=1836 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:13.312000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 00:42:13.357000 audit[1999]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:13.357000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffc6620300 a2=0 a3=0 items=0 ppid=1836 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:13.357000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 00:42:13.376000 audit[2001]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:42:13.376000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffecf0f49a0 a2=0 a3=0 items=0 ppid=1836 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:13.376000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 00:42:13.695000 audit[2005]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:13.695000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fffd744f130 a2=0 a3=0 items=0 ppid=1836 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:13.695000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 24 00:42:13.716000 audit[2007]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:13.716000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc17275810 a2=0 a3=0 items=0 ppid=1836 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:13.716000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 24 00:42:13.890000 audit[2015]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:13.890000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd31f0bf30 a2=0 a3=0 items=0 ppid=1836 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:13.890000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 
Jan 24 00:42:14.185000 audit[2021]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:14.185000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc1c4691d0 a2=0 a3=0 items=0 ppid=1836 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:14.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 24 00:42:14.281000 audit[2023]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:14.281000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff2e503f40 a2=0 a3=0 items=0 ppid=1836 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:14.281000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 24 00:42:14.314000 audit[2025]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:14.314000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff7ba7ed60 a2=0 a3=0 items=0 ppid=1836 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:14.314000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 24 00:42:14.345000 audit[2027]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:14.345000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc987820c0 a2=0 a3=0 items=0 ppid=1836 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:42:14.345000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:42:14.375000 audit[2029]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:42:14.381751 systemd-networkd[1505]: docker0: Link UP Jan 24 00:42:14.375000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd1a3d8780 a2=0 a3=0 items=0 ppid=1836 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
24 00:42:14.375000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 24 00:42:14.453986 dockerd[1836]: time="2026-01-24T00:42:14.453108326Z" level=info msg="Loading containers: done." Jan 24 00:42:14.779264 dockerd[1836]: time="2026-01-24T00:42:14.775698983Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:42:14.779264 dockerd[1836]: time="2026-01-24T00:42:14.776733474Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 24 00:42:14.779264 dockerd[1836]: time="2026-01-24T00:42:14.777981775Z" level=info msg="Initializing buildkit" Jan 24 00:42:16.109887 dockerd[1836]: time="2026-01-24T00:42:16.105029231Z" level=info msg="Completed buildkit initialization" Jan 24 00:42:16.351322 dockerd[1836]: time="2026-01-24T00:42:16.322261224Z" level=info msg="Daemon has completed initialization" Jan 24 00:42:16.353600 dockerd[1836]: time="2026-01-24T00:42:16.352778475Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:42:16.381070 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:42:16.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:42:18.872060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:42:18.943687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:42:21.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:42:21.626793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:42:21.653661 kernel: kauditd_printk_skb: 42 callbacks suppressed Jan 24 00:42:21.653949 kernel: audit: type=1130 audit(1769215341.624:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:42:21.699342 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:42:22.163272 containerd[1607]: time="2026-01-24T00:42:22.162771871Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 24 00:42:22.367892 kubelet[2079]: E0124 00:42:22.367672 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:42:22.380047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:42:22.381647 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
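The proctitle= fields in the audit records above are hex-encoded command lines in which NUL bytes separate the arguments, so the netfilter rules Docker installed can be read back directly from the log. A minimal Python sketch of that decoding (the helper name decode_proctitle is only illustrative, not a tool that appears in this log):

    def decode_proctitle(hex_str: str) -> list[str]:
        """Turn an audit PROCTITLE hex payload into the original argv list."""
        return bytes.fromhex(hex_str).decode("utf-8", "replace").split("\x00")

    # The PROCTITLE recorded at 00:42:12.890 above decodes to the rule Docker inserted:
    print(decode_proctitle(
        "2F7573722F62696E2F6970367461626C6573002D2D77616974"
        "002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745"
    ))
    # ['/usr/bin/ip6tables', '--wait', '-I', 'DOCKER-FORWARD', '-j', 'DOCKER-BRIDGE']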
Jan 24 00:42:22.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:42:22.382778 systemd[1]: kubelet.service: Consumed 1.759s CPU time, 110.4M memory peak. Jan 24 00:42:22.401514 kernel: audit: type=1131 audit(1769215342.381:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:42:23.908334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2275488731.mount: Deactivated successfully. Jan 24 00:42:28.253684 containerd[1607]: time="2026-01-24T00:42:28.253107136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:28.258599 containerd[1607]: time="2026-01-24T00:42:28.258557514Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=26079939" Jan 24 00:42:28.266968 containerd[1607]: time="2026-01-24T00:42:28.266122594Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:28.274059 containerd[1607]: time="2026-01-24T00:42:28.272584139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:28.277192 containerd[1607]: time="2026-01-24T00:42:28.276786319Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 6.113900494s" Jan 24 00:42:28.277192 containerd[1607]: time="2026-01-24T00:42:28.276874926Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 24 00:42:28.280934 containerd[1607]: time="2026-01-24T00:42:28.280694620Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 24 00:42:32.603734 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 24 00:42:32.613485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:42:33.810394 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:42:33.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:42:33.847355 kernel: audit: type=1130 audit(1769215353.811:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:42:33.870490 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:42:34.985765 containerd[1607]: time="2026-01-24T00:42:34.975467712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:35.485689 kubelet[2160]: E0124 00:42:35.478477 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:42:35.602693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:42:35.609313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:42:35.637365 systemd[1]: kubelet.service: Consumed 1.851s CPU time, 110.3M memory peak. Jan 24 00:42:35.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:42:35.717375 kernel: audit: type=1131 audit(1769215355.634:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:42:36.659682 update_engine[1586]: I20260124 00:42:36.656837 1586 update_attempter.cc:509] Updating boot flags... Jan 24 00:42:36.713305 containerd[1607]: time="2026-01-24T00:42:36.713246537Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 24 00:42:36.734385 containerd[1607]: time="2026-01-24T00:42:36.731486019Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:36.751890 containerd[1607]: time="2026-01-24T00:42:36.751332685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:36.753488 containerd[1607]: time="2026-01-24T00:42:36.753079005Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 8.472346393s" Jan 24 00:42:36.753488 containerd[1607]: time="2026-01-24T00:42:36.753132485Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 24 00:42:36.758948 containerd[1607]: time="2026-01-24T00:42:36.758406882Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 24 00:42:45.817095 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 24 00:42:45.832688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 24 00:42:46.488614 containerd[1607]: time="2026-01-24T00:42:46.486804323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:46.541251 containerd[1607]: time="2026-01-24T00:42:46.540607704Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15717792" Jan 24 00:42:46.550665 containerd[1607]: time="2026-01-24T00:42:46.550361902Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:46.577907 containerd[1607]: time="2026-01-24T00:42:46.577779149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:46.583109 containerd[1607]: time="2026-01-24T00:42:46.583050042Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 9.824589741s" Jan 24 00:42:46.583493 containerd[1607]: time="2026-01-24T00:42:46.583464737Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 24 00:42:46.588936 containerd[1607]: time="2026-01-24T00:42:46.588376110Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 24 00:42:47.809796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:42:47.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:42:47.853322 kernel: audit: type=1130 audit(1769215367.809:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:42:47.876288 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:42:49.300819 kubelet[2195]: E0124 00:42:49.300328 2195 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:42:49.313892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:42:49.314934 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:42:49.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:42:49.319034 systemd[1]: kubelet.service: Consumed 1.694s CPU time, 110.1M memory peak. 
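The kubelet crash loop above keeps failing on the same precondition: /var/lib/kubelet/config.yaml does not exist yet (a file that is typically written when kubeadm configures the node), so the unit exits with status 1 and systemd schedules another restart job. A tiny sketch of that failing check, with the path taken verbatim from the error message and nothing else assumed:

    from pathlib import Path

    # The unit keeps exiting until this file exists; until then systemd keeps
    # re-scheduling kubelet.service, which is the pattern visible in the log.
    config = Path("/var/lib/kubelet/config.yaml")
    if not config.is_file():
        raise SystemExit(f"open {config}: no such file or directory")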
Jan 24 00:42:49.357488 kernel: audit: type=1131 audit(1769215369.317:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:42:53.586756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133924512.mount: Deactivated successfully. Jan 24 00:42:57.649845 containerd[1607]: time="2026-01-24T00:42:57.648738393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:57.653018 containerd[1607]: time="2026-01-24T00:42:57.652547748Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25961571" Jan 24 00:42:57.659814 containerd[1607]: time="2026-01-24T00:42:57.658319110Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:57.664350 containerd[1607]: time="2026-01-24T00:42:57.664309984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:57.665103 containerd[1607]: time="2026-01-24T00:42:57.665048887Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 11.076635638s" Jan 24 00:42:57.665723 containerd[1607]: time="2026-01-24T00:42:57.665438694Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 24 00:42:57.671963 containerd[1607]: time="2026-01-24T00:42:57.671629309Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 24 00:42:59.359593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 24 00:42:59.378863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:42:59.873076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133219320.mount: Deactivated successfully. Jan 24 00:43:00.218726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:43:00.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:43:00.258483 kernel: audit: type=1130 audit(1769215380.218:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:43:00.267009 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:43:00.509544 kubelet[2231]: E0124 00:43:00.508899 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:43:00.518443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:43:00.518841 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:43:00.519714 systemd[1]: kubelet.service: Consumed 575ms CPU time, 112.2M memory peak. Jan 24 00:43:00.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:43:00.560296 kernel: audit: type=1131 audit(1769215380.518:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:43:06.643467 containerd[1607]: time="2026-01-24T00:43:06.643040055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:06.648696 containerd[1607]: time="2026-01-24T00:43:06.647926214Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22376942" Jan 24 00:43:06.651408 containerd[1607]: time="2026-01-24T00:43:06.651242618Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:06.656567 containerd[1607]: time="2026-01-24T00:43:06.656317709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:06.657847 containerd[1607]: time="2026-01-24T00:43:06.657677356Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 8.984065153s" Jan 24 00:43:06.657847 containerd[1607]: time="2026-01-24T00:43:06.657804183Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 24 00:43:06.660578 containerd[1607]: time="2026-01-24T00:43:06.660452902Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 24 00:43:07.682296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3500842330.mount: Deactivated successfully. 
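The containerd pull messages above report both the bytes read and the wall-clock duration per image, so a rough effective pull rate can be computed; for the coredns pull it comes out near 2.5 MB/s (the duration includes unpacking, so this is only a lower bound on network throughput). A quick check using only numbers copied from the log:

    # Values copied verbatim from the coredns pull records above.
    bytes_read = 22_376_942       # "bytes read=22376942"
    duration_s = 8.984065153      # "in 8.984065153s"
    print(f"{bytes_read / duration_s / 1e6:.2f} MB/s")  # ~2.49 MB/s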
Jan 24 00:43:07.706023 containerd[1607]: time="2026-01-24T00:43:07.705865141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:07.709359 containerd[1607]: time="2026-01-24T00:43:07.709233126Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 24 00:43:07.711455 containerd[1607]: time="2026-01-24T00:43:07.711073830Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:07.716997 containerd[1607]: time="2026-01-24T00:43:07.716860432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:07.719574 containerd[1607]: time="2026-01-24T00:43:07.718250112Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.057726327s" Jan 24 00:43:07.719574 containerd[1607]: time="2026-01-24T00:43:07.718284686Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 24 00:43:07.720953 containerd[1607]: time="2026-01-24T00:43:07.720071614Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 24 00:43:09.113889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428014597.mount: Deactivated successfully. Jan 24 00:43:10.599309 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 24 00:43:10.602942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:43:10.948087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:43:10.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:43:10.960438 kernel: audit: type=1130 audit(1769215390.947:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:43:10.967640 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:43:11.065841 kubelet[2346]: E0124 00:43:11.065715 2346 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:43:11.070994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:43:11.071642 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 24 00:43:11.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:43:11.072525 systemd[1]: kubelet.service: Consumed 336ms CPU time, 109.8M memory peak. Jan 24 00:43:11.085271 kernel: audit: type=1131 audit(1769215391.071:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:43:13.721108 containerd[1607]: time="2026-01-24T00:43:13.720428053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:13.724472 containerd[1607]: time="2026-01-24T00:43:13.724395813Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73002931" Jan 24 00:43:13.728676 containerd[1607]: time="2026-01-24T00:43:13.728532754Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:13.736377 containerd[1607]: time="2026-01-24T00:43:13.736254315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:13.739401 containerd[1607]: time="2026-01-24T00:43:13.738021265Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 6.017914065s" Jan 24 00:43:13.739401 containerd[1607]: time="2026-01-24T00:43:13.738103068Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 24 00:43:17.875585 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:43:17.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:43:17.875979 systemd[1]: kubelet.service: Consumed 336ms CPU time, 109.8M memory peak. Jan 24 00:43:17.880939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:43:17.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:43:17.905006 kernel: audit: type=1130 audit(1769215397.874:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:43:17.905117 kernel: audit: type=1131 audit(1769215397.874:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:43:17.971401 systemd[1]: Reload requested from client PID 2386 ('systemctl') (unit session-8.scope)... 
Jan 24 00:43:17.971421 systemd[1]: Reloading... Jan 24 00:43:18.374425 zram_generator::config[2435]: No configuration found. Jan 24 00:43:19.196666 systemd[1]: Reloading finished in 1224 ms. Jan 24 00:43:19.279000 audit: BPF prog-id=63 op=LOAD Jan 24 00:43:19.279000 audit: BPF prog-id=60 op=UNLOAD Jan 24 00:43:19.299960 kernel: audit: type=1334 audit(1769215399.279:301): prog-id=63 op=LOAD Jan 24 00:43:19.300123 kernel: audit: type=1334 audit(1769215399.279:302): prog-id=60 op=UNLOAD Jan 24 00:43:19.280000 audit: BPF prog-id=64 op=LOAD Jan 24 00:43:19.304499 kernel: audit: type=1334 audit(1769215399.280:303): prog-id=64 op=LOAD Jan 24 00:43:19.304558 kernel: audit: type=1334 audit(1769215399.280:304): prog-id=65 op=LOAD Jan 24 00:43:19.280000 audit: BPF prog-id=65 op=LOAD Jan 24 00:43:19.280000 audit: BPF prog-id=61 op=UNLOAD Jan 24 00:43:19.280000 audit: BPF prog-id=62 op=UNLOAD Jan 24 00:43:19.322410 kernel: audit: type=1334 audit(1769215399.280:305): prog-id=61 op=UNLOAD Jan 24 00:43:19.343587 kernel: audit: type=1334 audit(1769215399.280:306): prog-id=62 op=UNLOAD Jan 24 00:43:19.346657 kernel: audit: type=1334 audit(1769215399.284:307): prog-id=66 op=LOAD Jan 24 00:43:19.347121 kernel: audit: type=1334 audit(1769215399.284:308): prog-id=55 op=UNLOAD Jan 24 00:43:19.284000 audit: BPF prog-id=66 op=LOAD Jan 24 00:43:19.284000 audit: BPF prog-id=55 op=UNLOAD Jan 24 00:43:19.284000 audit: BPF prog-id=67 op=LOAD Jan 24 00:43:19.284000 audit: BPF prog-id=68 op=LOAD Jan 24 00:43:19.284000 audit: BPF prog-id=56 op=UNLOAD Jan 24 00:43:19.284000 audit: BPF prog-id=57 op=UNLOAD Jan 24 00:43:19.286000 audit: BPF prog-id=69 op=LOAD Jan 24 00:43:19.286000 audit: BPF prog-id=49 op=UNLOAD Jan 24 00:43:19.286000 audit: BPF prog-id=70 op=LOAD Jan 24 00:43:19.286000 audit: BPF prog-id=71 op=LOAD Jan 24 00:43:19.286000 audit: BPF prog-id=50 op=UNLOAD Jan 24 00:43:19.286000 audit: BPF prog-id=51 op=UNLOAD Jan 24 00:43:19.287000 audit: BPF prog-id=72 op=LOAD Jan 24 00:43:19.287000 audit: BPF prog-id=45 op=UNLOAD Jan 24 00:43:19.287000 audit: BPF prog-id=73 op=LOAD Jan 24 00:43:19.287000 audit: BPF prog-id=74 op=LOAD Jan 24 00:43:19.287000 audit: BPF prog-id=46 op=UNLOAD Jan 24 00:43:19.287000 audit: BPF prog-id=47 op=UNLOAD Jan 24 00:43:19.289000 audit: BPF prog-id=75 op=LOAD Jan 24 00:43:19.289000 audit: BPF prog-id=76 op=LOAD Jan 24 00:43:19.289000 audit: BPF prog-id=43 op=UNLOAD Jan 24 00:43:19.289000 audit: BPF prog-id=44 op=UNLOAD Jan 24 00:43:19.298000 audit: BPF prog-id=77 op=LOAD Jan 24 00:43:19.298000 audit: BPF prog-id=59 op=UNLOAD Jan 24 00:43:19.355000 audit: BPF prog-id=78 op=LOAD Jan 24 00:43:19.355000 audit: BPF prog-id=52 op=UNLOAD Jan 24 00:43:19.358000 audit: BPF prog-id=79 op=LOAD Jan 24 00:43:19.358000 audit: BPF prog-id=80 op=LOAD Jan 24 00:43:19.359000 audit: BPF prog-id=53 op=UNLOAD Jan 24 00:43:19.359000 audit: BPF prog-id=54 op=UNLOAD Jan 24 00:43:19.360000 audit: BPF prog-id=81 op=LOAD Jan 24 00:43:19.360000 audit: BPF prog-id=58 op=UNLOAD Jan 24 00:43:19.362000 audit: BPF prog-id=82 op=LOAD Jan 24 00:43:19.362000 audit: BPF prog-id=48 op=UNLOAD Jan 24 00:43:19.427551 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:43:19.427819 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:43:19.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 24 00:43:19.429576 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:43:19.430032 systemd[1]: kubelet.service: Consumed 304ms CPU time, 98.4M memory peak. Jan 24 00:43:19.438824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:43:20.008822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:43:20.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:43:20.047811 (kubelet)[2480]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:43:20.669238 kubelet[2480]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:43:20.669238 kubelet[2480]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:43:20.669238 kubelet[2480]: I0124 00:43:20.666981 2480 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:43:21.740337 kubelet[2480]: I0124 00:43:21.739214 2480 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:43:21.740337 kubelet[2480]: I0124 00:43:21.739435 2480 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:43:21.740337 kubelet[2480]: I0124 00:43:21.739509 2480 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:43:21.740337 kubelet[2480]: I0124 00:43:21.740003 2480 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:43:21.742631 kubelet[2480]: I0124 00:43:21.741005 2480 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:43:21.896714 kubelet[2480]: E0124 00:43:21.895609 2480 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:43:21.896714 kubelet[2480]: I0124 00:43:21.896678 2480 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:43:21.918296 kubelet[2480]: I0124 00:43:21.915619 2480 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 00:43:21.948596 kubelet[2480]: I0124 00:43:21.948460 2480 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 24 00:43:21.948924 kubelet[2480]: I0124 00:43:21.948820 2480 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:43:21.949174 kubelet[2480]: I0124 00:43:21.948853 2480 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:43:21.949404 kubelet[2480]: I0124 00:43:21.949285 2480 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:43:21.949404 kubelet[2480]: I0124 00:43:21.949303 2480 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:43:21.949461 kubelet[2480]: I0124 00:43:21.949437 2480 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 24 00:43:21.957569 kubelet[2480]: I0124 00:43:21.957399 2480 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:43:21.958419 kubelet[2480]: I0124 00:43:21.958298 2480 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:43:21.958419 kubelet[2480]: I0124 00:43:21.958366 2480 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:43:21.958419 kubelet[2480]: I0124 00:43:21.958396 2480 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:43:21.958841 kubelet[2480]: I0124 00:43:21.958425 2480 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:43:21.961483 kubelet[2480]: E0124 00:43:21.961287 2480 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:43:21.964353 kubelet[2480]: E0124 00:43:21.964131 2480 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:43:21.973747 kubelet[2480]: I0124 00:43:21.972406 2480 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 00:43:21.976358 kubelet[2480]: I0124 00:43:21.975824 2480 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:43:21.976358 kubelet[2480]: I0124 00:43:21.975952 2480 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:43:21.976358 kubelet[2480]: W0124 00:43:21.976042 2480 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:43:21.989336 kubelet[2480]: I0124 00:43:21.986859 2480 server.go:1262] "Started kubelet" Jan 24 00:43:21.989336 kubelet[2480]: I0124 00:43:21.988936 2480 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:43:21.994304 kubelet[2480]: I0124 00:43:21.993044 2480 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:43:21.994304 kubelet[2480]: I0124 00:43:21.993792 2480 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:43:22.002737 kubelet[2480]: I0124 00:43:21.999383 2480 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:43:22.012693 kubelet[2480]: I0124 00:43:22.012487 2480 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:43:22.014692 kubelet[2480]: E0124 00:43:22.012775 2480 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:43:22.014692 kubelet[2480]: I0124 00:43:22.013378 2480 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:43:22.014692 kubelet[2480]: I0124 00:43:22.013489 2480 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:43:22.039693 kubelet[2480]: I0124 00:43:22.017395 2480 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:43:22.039693 kubelet[2480]: E0124 00:43:22.017658 2480 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:43:22.039693 kubelet[2480]: E0124 00:43:22.017807 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="200ms" Jan 24 00:43:22.039693 kubelet[2480]: E0124 00:43:22.022850 2480 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:43:22.039693 kubelet[2480]: I0124 00:43:22.023065 2480 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:43:22.039693 kubelet[2480]: I0124 00:43:22.023081 2480 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:43:22.049437 kubelet[2480]: I0124 00:43:22.049380 2480 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:43:22.049651 kubelet[2480]: I0124 00:43:22.049628 2480 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:43:22.045000 audit[2498]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:22.045000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffac6b1340 a2=0 a3=0 items=0 ppid=2480 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:22.045000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 00:43:22.052566 kubelet[2480]: I0124 00:43:22.052501 2480 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:43:22.056000 audit[2499]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:22.056000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd237e9180 a2=0 a3=0 items=0 ppid=2480 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:22.056000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 00:43:22.076814 kubelet[2480]: E0124 00:43:22.020835 2480 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d8408ce927f91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:43:21.986785169 +0000 UTC m=+1.913661368,LastTimestamp:2026-01-24 00:43:21.986785169 +0000 UTC m=+1.913661368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:43:22.080000 audit[2501]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:22.080000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc44dccd80 a2=0 a3=0 items=0 ppid=2480 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:22.080000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:43:22.098000 audit[2505]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:22.098000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffffee8e70 a2=0 a3=0 items=0 ppid=2480 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:22.098000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:43:22.114713 kubelet[2480]: E0124 00:43:22.114486 2480 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:43:22.120075 kubelet[2480]: I0124 00:43:22.119658 2480 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:43:22.120075 kubelet[2480]: I0124 00:43:22.119680 2480 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:43:22.120075 kubelet[2480]: I0124 00:43:22.119704 2480 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:43:22.130764 kubelet[2480]: I0124 00:43:22.127062 2480 policy_none.go:49] "None policy: Start" Jan 24 00:43:22.130764 kubelet[2480]: I0124 00:43:22.127402 2480 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:43:22.130764 kubelet[2480]: I0124 00:43:22.127425 2480 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:43:22.136358 kubelet[2480]: I0124 00:43:22.136110 2480 policy_none.go:47] "Start" Jan 24 00:43:22.167347 kubelet[2480]: I0124 00:43:22.167267 2480 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 00:43:22.163000 audit[2511]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:22.163000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd41d70360 a2=0 a3=0 items=0 ppid=2480 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:22.163000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 24 00:43:22.172000 audit[2512]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:22.172000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2553f190 a2=0 a3=0 items=0 ppid=2480 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:22.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 00:43:22.186628 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
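The kubelet's own chain setup is audited the same way as Docker's rules earlier, so the same hex decoding applies; for example, the PROCTITLE in the 00:43:22.172 record above turns out to be the creation of the kubelet's canary chain. A self-contained check:

    # Same decoding as for the Docker rules earlier in the log.
    payload = (
        "69707461626C6573002D770035002D4E"
        "004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65"
    )
    print(bytes.fromhex(payload).decode().split("\x00"))
    # ['iptables', '-w', '5', '-N', 'KUBE-KUBELET-CANARY', '-t', 'mangle']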
Jan 24 00:43:22.194000 audit[2515]: NETFILTER_CFG table=nat:48 family=2 entries=1 op=nft_register_chain pid=2515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:22.194000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed74230c0 a2=0 a3=0 items=0 ppid=2480 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:22.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 00:43:22.204000 audit[2517]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:22.204000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcbbb59de0 a2=0 a3=0 items=0 ppid=2480 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:22.204000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 00:43:22.212714 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 00:43:22.214634 kubelet[2480]: E0124 00:43:22.214602 2480 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:43:22.224093 kubelet[2480]: E0124 00:43:22.222774 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="400ms" Jan 24 00:43:22.224000 audit[2513]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=2513 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:22.224000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeb8bd6f00 a2=0 a3=0 items=0 ppid=2480 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:22.224000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 00:43:22.231314 kubelet[2480]: I0124 00:43:22.231050 2480 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:43:22.231975 kubelet[2480]: I0124 00:43:22.231483 2480 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:43:22.231975 kubelet[2480]: I0124 00:43:22.231568 2480 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:43:22.231975 kubelet[2480]: E0124 00:43:22.231636 2480 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:43:22.233029 kubelet[2480]: E0124 00:43:22.232820 2480 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:43:22.235000 audit[2518]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:22.235000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd95f40a10 a2=0 a3=0 items=0 ppid=2480 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:22.235000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 00:43:22.239000 audit[2519]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2519 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:22.239000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2c99a270 a2=0 a3=0 items=0 ppid=2480 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:22.239000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 00:43:22.244000 audit[2520]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:22.244000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6dd1c540 a2=0 a3=0 items=0 ppid=2480 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:22.244000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 00:43:22.267497 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 24 00:43:22.282309 kubelet[2480]: E0124 00:43:22.282231 2480 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:43:22.282602 kubelet[2480]: I0124 00:43:22.282538 2480 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:43:22.282644 kubelet[2480]: I0124 00:43:22.282596 2480 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:43:22.284272 kubelet[2480]: I0124 00:43:22.283845 2480 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:43:22.289736 kubelet[2480]: E0124 00:43:22.289625 2480 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:43:22.289798 kubelet[2480]: E0124 00:43:22.289748 2480 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 00:43:22.366946 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 24 00:43:22.389436 kubelet[2480]: I0124 00:43:22.388433 2480 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:43:22.389436 kubelet[2480]: E0124 00:43:22.388933 2480 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 24 00:43:22.405518 kubelet[2480]: E0124 00:43:22.405373 2480 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:43:22.417203 kubelet[2480]: I0124 00:43:22.415563 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:43:22.417203 kubelet[2480]: I0124 00:43:22.415603 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b0676e97f4f113e7c6261f20a3f83dcd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b0676e97f4f113e7c6261f20a3f83dcd\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:43:22.417203 kubelet[2480]: I0124 00:43:22.415625 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b0676e97f4f113e7c6261f20a3f83dcd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b0676e97f4f113e7c6261f20a3f83dcd\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:43:22.417203 kubelet[2480]: I0124 00:43:22.415650 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:22.417203 kubelet[2480]: I0124 00:43:22.415670 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:22.415944 systemd[1]: Created slice kubepods-burstable-podb0676e97f4f113e7c6261f20a3f83dcd.slice - libcontainer container kubepods-burstable-podb0676e97f4f113e7c6261f20a3f83dcd.slice. Jan 24 00:43:22.417618 kubelet[2480]: I0124 00:43:22.415689 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b0676e97f4f113e7c6261f20a3f83dcd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b0676e97f4f113e7c6261f20a3f83dcd\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:43:22.417618 kubelet[2480]: I0124 00:43:22.415706 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:22.417618 kubelet[2480]: I0124 00:43:22.415725 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:22.417618 kubelet[2480]: I0124 00:43:22.415745 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:22.422816 kubelet[2480]: E0124 00:43:22.421976 2480 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:43:22.457070 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. 
Jan 24 00:43:22.464603 kubelet[2480]: E0124 00:43:22.463760 2480 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:43:22.593349 kubelet[2480]: I0124 00:43:22.591824 2480 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:43:22.593349 kubelet[2480]: E0124 00:43:22.592652 2480 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 24 00:43:22.627091 kubelet[2480]: E0124 00:43:22.626791 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="800ms" Jan 24 00:43:22.719416 kubelet[2480]: E0124 00:43:22.717014 2480 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:22.721131 containerd[1607]: time="2026-01-24T00:43:22.718828681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 24 00:43:22.733719 kubelet[2480]: E0124 00:43:22.733512 2480 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:22.734610 containerd[1607]: time="2026-01-24T00:43:22.734481033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b0676e97f4f113e7c6261f20a3f83dcd,Namespace:kube-system,Attempt:0,}" Jan 24 00:43:22.777810 kubelet[2480]: E0124 00:43:22.777641 2480 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:22.779240 containerd[1607]: time="2026-01-24T00:43:22.779041492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 24 00:43:23.005358 kubelet[2480]: I0124 00:43:23.004797 2480 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:43:23.005358 kubelet[2480]: E0124 00:43:23.005261 2480 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 24 00:43:23.211792 kubelet[2480]: E0124 00:43:23.210707 2480 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:43:23.254463 kubelet[2480]: E0124 00:43:23.254039 2480 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 
00:43:23.315612 kubelet[2480]: E0124 00:43:23.315414 2480 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:43:23.429382 kubelet[2480]: E0124 00:43:23.429283 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="1.6s" Jan 24 00:43:23.506660 kubelet[2480]: E0124 00:43:23.505711 2480 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:43:23.657996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount971481154.mount: Deactivated successfully. Jan 24 00:43:23.677702 containerd[1607]: time="2026-01-24T00:43:23.677524325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:43:23.687800 containerd[1607]: time="2026-01-24T00:43:23.687569381Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 24 00:43:23.700360 containerd[1607]: time="2026-01-24T00:43:23.698819758Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:43:23.708619 containerd[1607]: time="2026-01-24T00:43:23.707271108Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:43:23.718426 containerd[1607]: time="2026-01-24T00:43:23.718281635Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:43:23.721045 containerd[1607]: time="2026-01-24T00:43:23.720729592Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 24 00:43:23.722807 containerd[1607]: time="2026-01-24T00:43:23.722644192Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 24 00:43:23.725423 containerd[1607]: time="2026-01-24T00:43:23.725258224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:43:23.729646 containerd[1607]: time="2026-01-24T00:43:23.729421430Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 993.395999ms" Jan 24 00:43:23.729948 containerd[1607]: time="2026-01-24T00:43:23.729820130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 986.799572ms" Jan 24 00:43:23.735408 containerd[1607]: time="2026-01-24T00:43:23.734328846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 942.945472ms" Jan 24 00:43:23.811661 kubelet[2480]: I0124 00:43:23.811623 2480 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:43:23.815749 kubelet[2480]: E0124 00:43:23.812076 2480 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 24 00:43:23.908646 kubelet[2480]: E0124 00:43:23.907818 2480 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:43:23.912646 containerd[1607]: time="2026-01-24T00:43:23.912216736Z" level=info msg="connecting to shim b36800f3b9c3baaafc52ded6093f1c5fa6117d9e2895748ecf86c38800826174" address="unix:///run/containerd/s/4a8c8e22193b92eb76adb623c6f9eedce4a631e3460163f87a83dd331b30adea" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:43:23.930944 containerd[1607]: time="2026-01-24T00:43:23.930689829Z" level=info msg="connecting to shim ed35f71a7fd660e12e78ac4af0479b9e52551a5266b41cca5a251e1aa7ef8769" address="unix:///run/containerd/s/47067c65300e29f056edddb1d5b5ae16a7f90e28ab4bf538bee53896193a04bc" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:43:23.966272 containerd[1607]: time="2026-01-24T00:43:23.966038884Z" level=info msg="connecting to shim 2f3b458a0886dbfa46b15525bf8a68277c6f29a7226ec788bcfbd864a1229ae1" address="unix:///run/containerd/s/2c98284bf477fd65cdc9a348333b652957b93de750cd1972d69f5671a1805bf2" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:43:24.077545 systemd[1]: Started cri-containerd-ed35f71a7fd660e12e78ac4af0479b9e52551a5266b41cca5a251e1aa7ef8769.scope - libcontainer container ed35f71a7fd660e12e78ac4af0479b9e52551a5266b41cca5a251e1aa7ef8769. Jan 24 00:43:24.369068 systemd[1]: Started cri-containerd-2f3b458a0886dbfa46b15525bf8a68277c6f29a7226ec788bcfbd864a1229ae1.scope - libcontainer container 2f3b458a0886dbfa46b15525bf8a68277c6f29a7226ec788bcfbd864a1229ae1. 
Jan 24 00:43:24.379000 audit: BPF prog-id=83 op=LOAD Jan 24 00:43:24.384946 kernel: kauditd_printk_skb: 70 callbacks suppressed Jan 24 00:43:24.385022 kernel: audit: type=1334 audit(1769215404.379:355): prog-id=83 op=LOAD Jan 24 00:43:24.391000 audit: BPF prog-id=84 op=LOAD Jan 24 00:43:24.391000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2544 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.404334 systemd[1]: Started cri-containerd-b36800f3b9c3baaafc52ded6093f1c5fa6117d9e2895748ecf86c38800826174.scope - libcontainer container b36800f3b9c3baaafc52ded6093f1c5fa6117d9e2895748ecf86c38800826174. Jan 24 00:43:24.425512 kernel: audit: type=1334 audit(1769215404.391:356): prog-id=84 op=LOAD Jan 24 00:43:24.425603 kernel: audit: type=1300 audit(1769215404.391:356): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2544 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.391000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333566373161376664363630653132653738616334616630343739 Jan 24 00:43:24.464498 kernel: audit: type=1327 audit(1769215404.391:356): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333566373161376664363630653132653738616334616630343739 Jan 24 00:43:24.464567 kernel: audit: type=1334 audit(1769215404.391:357): prog-id=84 op=UNLOAD Jan 24 00:43:24.391000 audit: BPF prog-id=84 op=UNLOAD Jan 24 00:43:24.482713 kernel: audit: type=1300 audit(1769215404.391:357): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2544 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.391000 audit[2578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2544 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.391000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333566373161376664363630653132653738616334616630343739 Jan 24 00:43:24.502250 kernel: audit: type=1327 audit(1769215404.391:357): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333566373161376664363630653132653738616334616630343739 Jan 24 00:43:24.503290 kernel: audit: type=1334 audit(1769215404.392:358): prog-id=85 op=LOAD Jan 24 00:43:24.392000 audit: BPF prog-id=85 op=LOAD Jan 24 00:43:24.511848 kernel: audit: 
audit_backlog=65 > audit_backlog_limit=64 Jan 24 00:43:24.511962 kernel: audit: type=1300 audit(1769215404.392:358): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2544 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.392000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2544 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333566373161376664363630653132653738616334616630343739 Jan 24 00:43:24.393000 audit: BPF prog-id=86 op=LOAD Jan 24 00:43:24.393000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2544 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333566373161376664363630653132653738616334616630343739 Jan 24 00:43:24.394000 audit: BPF prog-id=86 op=UNLOAD Jan 24 00:43:24.394000 audit[2578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2544 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333566373161376664363630653132653738616334616630343739 Jan 24 00:43:24.394000 audit: BPF prog-id=85 op=UNLOAD Jan 24 00:43:24.394000 audit[2578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2544 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333566373161376664363630653132653738616334616630343739 Jan 24 00:43:24.394000 audit: BPF prog-id=87 op=LOAD Jan 24 00:43:24.394000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2544 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.394000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333566373161376664363630653132653738616334616630343739 Jan 24 00:43:24.451000 audit: BPF prog-id=88 op=LOAD Jan 24 00:43:24.452000 audit: BPF prog-id=89 op=LOAD Jan 24 00:43:24.452000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2535 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233363830306633623963336261616166633532646564363039336631 Jan 24 00:43:24.452000 audit: BPF prog-id=89 op=UNLOAD Jan 24 00:43:24.452000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233363830306633623963336261616166633532646564363039336631 Jan 24 00:43:24.452000 audit: BPF prog-id=90 op=LOAD Jan 24 00:43:24.452000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2535 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233363830306633623963336261616166633532646564363039336631 Jan 24 00:43:24.452000 audit: BPF prog-id=91 op=LOAD Jan 24 00:43:24.452000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2535 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233363830306633623963336261616166633532646564363039336631 Jan 24 00:43:24.452000 audit: BPF prog-id=91 op=UNLOAD Jan 24 00:43:24.452000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.452000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233363830306633623963336261616166633532646564363039336631 Jan 24 00:43:24.452000 audit: BPF prog-id=90 op=UNLOAD Jan 24 00:43:24.452000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233363830306633623963336261616166633532646564363039336631 Jan 24 00:43:24.453000 audit: BPF prog-id=92 op=LOAD Jan 24 00:43:24.453000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2535 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233363830306633623963336261616166633532646564363039336631 Jan 24 00:43:24.466000 audit: BPF prog-id=93 op=LOAD Jan 24 00:43:24.468000 audit: BPF prog-id=94 op=LOAD Jan 24 00:43:24.468000 audit[2579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2560 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266336234353861303838366462666134366231353532356266386136 Jan 24 00:43:24.468000 audit: BPF prog-id=94 op=UNLOAD Jan 24 00:43:24.468000 audit[2579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266336234353861303838366462666134366231353532356266386136 Jan 24 00:43:24.469000 audit: BPF prog-id=95 op=LOAD Jan 24 00:43:24.469000 audit[2579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2560 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.469000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266336234353861303838366462666134366231353532356266386136 Jan 24 00:43:24.469000 audit: BPF prog-id=96 op=LOAD Jan 24 00:43:24.469000 audit[2579]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2560 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266336234353861303838366462666134366231353532356266386136 Jan 24 00:43:24.511000 audit: BPF prog-id=95 op=UNLOAD Jan 24 00:43:24.511000 audit[2579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266336234353861303838366462666134366231353532356266386136 Jan 24 00:43:24.549000 audit: BPF prog-id=97 op=LOAD Jan 24 00:43:24.549000 audit[2579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2560 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266336234353861303838366462666134366231353532356266386136 Jan 24 00:43:24.572860 containerd[1607]: time="2026-01-24T00:43:24.572735639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed35f71a7fd660e12e78ac4af0479b9e52551a5266b41cca5a251e1aa7ef8769\"" Jan 24 00:43:24.580652 kubelet[2480]: E0124 00:43:24.580313 2480 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:24.604401 containerd[1607]: time="2026-01-24T00:43:24.604260526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"b36800f3b9c3baaafc52ded6093f1c5fa6117d9e2895748ecf86c38800826174\"" Jan 24 00:43:24.605494 containerd[1607]: time="2026-01-24T00:43:24.605035352Z" level=info msg="CreateContainer within sandbox \"ed35f71a7fd660e12e78ac4af0479b9e52551a5266b41cca5a251e1aa7ef8769\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:43:24.607129 kubelet[2480]: E0124 00:43:24.606658 2480 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:24.614544 containerd[1607]: time="2026-01-24T00:43:24.614508682Z" level=info msg="CreateContainer within sandbox \"b36800f3b9c3baaafc52ded6093f1c5fa6117d9e2895748ecf86c38800826174\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:43:24.665369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212455997.mount: Deactivated successfully. Jan 24 00:43:24.673531 containerd[1607]: time="2026-01-24T00:43:24.670819085Z" level=info msg="Container 22811ac0ee1d6f3ef77d126e4c3f76ef2b35528b99d9f11385460beeb83748b6: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:43:24.680653 containerd[1607]: time="2026-01-24T00:43:24.680493155Z" level=info msg="Container dd01aaa67fa3b17854890b429c2269224ddc440f07e737b9b5602029c8271040: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:43:24.700664 containerd[1607]: time="2026-01-24T00:43:24.699322658Z" level=info msg="CreateContainer within sandbox \"ed35f71a7fd660e12e78ac4af0479b9e52551a5266b41cca5a251e1aa7ef8769\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"22811ac0ee1d6f3ef77d126e4c3f76ef2b35528b99d9f11385460beeb83748b6\"" Jan 24 00:43:24.701575 containerd[1607]: time="2026-01-24T00:43:24.701374547Z" level=info msg="StartContainer for \"22811ac0ee1d6f3ef77d126e4c3f76ef2b35528b99d9f11385460beeb83748b6\"" Jan 24 00:43:24.703867 containerd[1607]: time="2026-01-24T00:43:24.703062137Z" level=info msg="connecting to shim 22811ac0ee1d6f3ef77d126e4c3f76ef2b35528b99d9f11385460beeb83748b6" address="unix:///run/containerd/s/47067c65300e29f056edddb1d5b5ae16a7f90e28ab4bf538bee53896193a04bc" protocol=ttrpc version=3 Jan 24 00:43:24.703867 containerd[1607]: time="2026-01-24T00:43:24.703735555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b0676e97f4f113e7c6261f20a3f83dcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f3b458a0886dbfa46b15525bf8a68277c6f29a7226ec788bcfbd864a1229ae1\"" Jan 24 00:43:24.708780 kubelet[2480]: E0124 00:43:24.708384 2480 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:24.726474 containerd[1607]: time="2026-01-24T00:43:24.726019287Z" level=info msg="CreateContainer within sandbox \"b36800f3b9c3baaafc52ded6093f1c5fa6117d9e2895748ecf86c38800826174\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dd01aaa67fa3b17854890b429c2269224ddc440f07e737b9b5602029c8271040\"" Jan 24 00:43:24.730643 containerd[1607]: time="2026-01-24T00:43:24.729847562Z" level=info msg="StartContainer for \"dd01aaa67fa3b17854890b429c2269224ddc440f07e737b9b5602029c8271040\"" Jan 24 00:43:24.732219 containerd[1607]: time="2026-01-24T00:43:24.731353523Z" level=info msg="CreateContainer within sandbox \"2f3b458a0886dbfa46b15525bf8a68277c6f29a7226ec788bcfbd864a1229ae1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:43:24.743244 containerd[1607]: time="2026-01-24T00:43:24.742539964Z" level=info msg="connecting to shim dd01aaa67fa3b17854890b429c2269224ddc440f07e737b9b5602029c8271040" address="unix:///run/containerd/s/4a8c8e22193b92eb76adb623c6f9eedce4a631e3460163f87a83dd331b30adea" protocol=ttrpc version=3 Jan 24 00:43:24.758980 systemd[1]: Started 
cri-containerd-22811ac0ee1d6f3ef77d126e4c3f76ef2b35528b99d9f11385460beeb83748b6.scope - libcontainer container 22811ac0ee1d6f3ef77d126e4c3f76ef2b35528b99d9f11385460beeb83748b6. Jan 24 00:43:24.762072 containerd[1607]: time="2026-01-24T00:43:24.760840542Z" level=info msg="Container 2ed60b8c819622b6335f466f266af638cadb218e6a7c074d01e8f858eeb4ef0f: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:43:24.790964 containerd[1607]: time="2026-01-24T00:43:24.790843011Z" level=info msg="CreateContainer within sandbox \"2f3b458a0886dbfa46b15525bf8a68277c6f29a7226ec788bcfbd864a1229ae1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2ed60b8c819622b6335f466f266af638cadb218e6a7c074d01e8f858eeb4ef0f\"" Jan 24 00:43:24.791985 containerd[1607]: time="2026-01-24T00:43:24.791882352Z" level=info msg="StartContainer for \"2ed60b8c819622b6335f466f266af638cadb218e6a7c074d01e8f858eeb4ef0f\"" Jan 24 00:43:24.791000 audit: BPF prog-id=98 op=LOAD Jan 24 00:43:24.794024 containerd[1607]: time="2026-01-24T00:43:24.793650663Z" level=info msg="connecting to shim 2ed60b8c819622b6335f466f266af638cadb218e6a7c074d01e8f858eeb4ef0f" address="unix:///run/containerd/s/2c98284bf477fd65cdc9a348333b652957b93de750cd1972d69f5671a1805bf2" protocol=ttrpc version=3 Jan 24 00:43:24.793000 audit: BPF prog-id=99 op=LOAD Jan 24 00:43:24.793000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2544 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232383131616330656531643666336566373764313236653463336637 Jan 24 00:43:24.793000 audit: BPF prog-id=99 op=UNLOAD Jan 24 00:43:24.793000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2544 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232383131616330656531643666336566373764313236653463336637 Jan 24 00:43:24.794000 audit: BPF prog-id=100 op=LOAD Jan 24 00:43:24.794000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2544 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232383131616330656531643666336566373764313236653463336637 Jan 24 00:43:24.799000 audit: BPF prog-id=101 op=LOAD Jan 24 00:43:24.799000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2544 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232383131616330656531643666336566373764313236653463336637 Jan 24 00:43:24.799000 audit: BPF prog-id=101 op=UNLOAD Jan 24 00:43:24.799000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2544 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232383131616330656531643666336566373764313236653463336637 Jan 24 00:43:24.799000 audit: BPF prog-id=100 op=UNLOAD Jan 24 00:43:24.799000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2544 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232383131616330656531643666336566373764313236653463336637 Jan 24 00:43:24.799000 audit: BPF prog-id=102 op=LOAD Jan 24 00:43:24.799000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2544 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232383131616330656531643666336566373764313236653463336637 Jan 24 00:43:24.809720 systemd[1]: Started cri-containerd-dd01aaa67fa3b17854890b429c2269224ddc440f07e737b9b5602029c8271040.scope - libcontainer container dd01aaa67fa3b17854890b429c2269224ddc440f07e737b9b5602029c8271040. Jan 24 00:43:24.853384 systemd[1]: Started cri-containerd-2ed60b8c819622b6335f466f266af638cadb218e6a7c074d01e8f858eeb4ef0f.scope - libcontainer container 2ed60b8c819622b6335f466f266af638cadb218e6a7c074d01e8f858eeb4ef0f. 
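The runc-related audit records here are BPF program loads and unloads. On x86_64 (audit arch c000003e) the syscall numbers appearing in this log are bpf(2) = 321, close(2) = 3, and, in the earlier iptables/ip6tables records, sendmsg(2) = 46. A small lookup sketch limited to those numbers (X86_64_SYSCALLS and syscall_name are names made up for this illustration):

    # Illustration: map the audit "syscall=" numbers appearing in this log to names.
    # Only x86_64 (arch=c000003e) and only the syscalls observed above are covered.
    X86_64_SYSCALLS = {
        3: "close",     # fd close that accompanies the op=UNLOAD records
        46: "sendmsg",  # xtables-nft-multi delivering rules over netlink
        321: "bpf",     # runc loading BPF programs during container setup
    }

    def syscall_name(arch: str, nr: int) -> str:
        if arch != "c000003e":  # audit's identifier for x86_64
            return f"unhandled arch {arch}"
        return X86_64_SYSCALLS.get(nr, f"syscall {nr}")

    print(syscall_name("c000003e", 321))  # -> bpf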
Jan 24 00:43:24.874000 audit: BPF prog-id=103 op=LOAD Jan 24 00:43:24.879000 audit: BPF prog-id=104 op=LOAD Jan 24 00:43:24.879000 audit[2678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2535 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464303161616136376661336231373835343839306234323963323236 Jan 24 00:43:24.879000 audit: BPF prog-id=104 op=UNLOAD Jan 24 00:43:24.879000 audit[2678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464303161616136376661336231373835343839306234323963323236 Jan 24 00:43:24.879000 audit: BPF prog-id=105 op=LOAD Jan 24 00:43:24.879000 audit[2678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2535 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464303161616136376661336231373835343839306234323963323236 Jan 24 00:43:24.879000 audit: BPF prog-id=106 op=LOAD Jan 24 00:43:24.879000 audit[2678]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2535 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464303161616136376661336231373835343839306234323963323236 Jan 24 00:43:24.879000 audit: BPF prog-id=106 op=UNLOAD Jan 24 00:43:24.879000 audit[2678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464303161616136376661336231373835343839306234323963323236 Jan 24 00:43:24.879000 audit: BPF prog-id=105 op=UNLOAD Jan 24 00:43:24.879000 audit[2678]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464303161616136376661336231373835343839306234323963323236 Jan 24 00:43:24.879000 audit: BPF prog-id=107 op=LOAD Jan 24 00:43:24.879000 audit[2678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2535 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464303161616136376661336231373835343839306234323963323236 Jan 24 00:43:24.887000 audit: BPF prog-id=108 op=LOAD Jan 24 00:43:24.888000 audit: BPF prog-id=109 op=LOAD Jan 24 00:43:24.888000 audit[2696]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2560 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643630623863383139363232623633333566343636663236366166 Jan 24 00:43:24.888000 audit: BPF prog-id=109 op=UNLOAD Jan 24 00:43:24.888000 audit[2696]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643630623863383139363232623633333566343636663236366166 Jan 24 00:43:24.888000 audit: BPF prog-id=110 op=LOAD Jan 24 00:43:24.888000 audit[2696]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2560 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643630623863383139363232623633333566343636663236366166 Jan 24 00:43:24.888000 audit: BPF prog-id=111 op=LOAD Jan 24 00:43:24.888000 audit[2696]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2560 pid=2696 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643630623863383139363232623633333566343636663236366166 Jan 24 00:43:24.888000 audit: BPF prog-id=111 op=UNLOAD Jan 24 00:43:24.888000 audit[2696]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643630623863383139363232623633333566343636663236366166 Jan 24 00:43:24.888000 audit: BPF prog-id=110 op=UNLOAD Jan 24 00:43:24.888000 audit[2696]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643630623863383139363232623633333566343636663236366166 Jan 24 00:43:24.888000 audit: BPF prog-id=112 op=LOAD Jan 24 00:43:24.888000 audit[2696]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2560 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:24.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643630623863383139363232623633333566343636663236366166 Jan 24 00:43:24.919818 containerd[1607]: time="2026-01-24T00:43:24.919456278Z" level=info msg="StartContainer for \"22811ac0ee1d6f3ef77d126e4c3f76ef2b35528b99d9f11385460beeb83748b6\" returns successfully" Jan 24 00:43:24.983262 containerd[1607]: time="2026-01-24T00:43:24.983211750Z" level=info msg="StartContainer for \"dd01aaa67fa3b17854890b429c2269224ddc440f07e737b9b5602029c8271040\" returns successfully" Jan 24 00:43:24.993819 kubelet[2480]: E0124 00:43:24.993568 2480 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:43:25.000359 containerd[1607]: time="2026-01-24T00:43:25.000301980Z" level=info msg="StartContainer for \"2ed60b8c819622b6335f466f266af638cadb218e6a7c074d01e8f858eeb4ef0f\" returns successfully" Jan 24 00:43:25.030221 
kubelet[2480]: E0124 00:43:25.029859 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="3.2s" Jan 24 00:43:25.407568 kubelet[2480]: E0124 00:43:25.407483 2480 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:43:25.407739 kubelet[2480]: E0124 00:43:25.407695 2480 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:25.422272 kubelet[2480]: E0124 00:43:25.419739 2480 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:43:25.422272 kubelet[2480]: I0124 00:43:25.420265 2480 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:43:25.430467 kubelet[2480]: E0124 00:43:25.427869 2480 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:43:25.437583 kubelet[2480]: E0124 00:43:25.436824 2480 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:25.439625 kubelet[2480]: E0124 00:43:25.438787 2480 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:26.424251 kubelet[2480]: E0124 00:43:26.423849 2480 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:43:26.424750 kubelet[2480]: E0124 00:43:26.424507 2480 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:43:26.427103 kubelet[2480]: E0124 00:43:26.425664 2480 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:26.431264 kubelet[2480]: E0124 00:43:26.429267 2480 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:27.907063 kubelet[2480]: I0124 00:43:27.906810 2480 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:43:27.913710 kubelet[2480]: I0124 00:43:27.913683 2480 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:43:27.942267 kubelet[2480]: E0124 00:43:27.939556 2480 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 24 00:43:27.942267 kubelet[2480]: I0124 00:43:27.939594 2480 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:43:27.945665 kubelet[2480]: E0124 00:43:27.945589 2480 kubelet.go:3221] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 24 00:43:27.945665 kubelet[2480]: I0124 00:43:27.945665 2480 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:27.948769 kubelet[2480]: E0124 00:43:27.948741 2480 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:27.970521 kubelet[2480]: I0124 00:43:27.970427 2480 apiserver.go:52] "Watching apiserver" Jan 24 00:43:28.015300 kubelet[2480]: I0124 00:43:28.013763 2480 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:43:30.407859 kubelet[2480]: I0124 00:43:30.404684 2480 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:43:30.472265 kubelet[2480]: E0124 00:43:30.467015 2480 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:31.084985 kubelet[2480]: I0124 00:43:31.084629 2480 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:31.118537 kubelet[2480]: E0124 00:43:31.118426 2480 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:31.439540 kubelet[2480]: E0124 00:43:31.439005 2480 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:31.442095 kubelet[2480]: E0124 00:43:31.441057 2480 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:31.676735 systemd[1]: Reload requested from client PID 2778 ('systemctl') (unit session-8.scope)... Jan 24 00:43:31.676758 systemd[1]: Reloading... Jan 24 00:43:31.980875 zram_generator::config[2822]: No configuration found. Jan 24 00:43:32.406330 kubelet[2480]: I0124 00:43:32.405558 2480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.405535716 podStartE2EDuration="2.405535716s" podCreationTimestamp="2026-01-24 00:43:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:43:32.374790434 +0000 UTC m=+12.301666653" watchObservedRunningTime="2026-01-24 00:43:32.405535716 +0000 UTC m=+12.332411904" Jan 24 00:43:32.531592 systemd[1]: Reloading finished in 854 ms. Jan 24 00:43:32.633367 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:43:32.665776 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:43:32.666460 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:43:32.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:43:32.668337 systemd[1]: kubelet.service: Consumed 2.849s CPU time, 128M memory peak. Jan 24 00:43:32.699295 kernel: kauditd_printk_skb: 122 callbacks suppressed Jan 24 00:43:32.699417 kernel: audit: type=1131 audit(1769215412.665:402): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:43:32.681773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:43:32.677000 audit: BPF prog-id=113 op=LOAD Jan 24 00:43:32.709870 kernel: audit: type=1334 audit(1769215412.677:403): prog-id=113 op=LOAD Jan 24 00:43:32.682000 audit: BPF prog-id=78 op=UNLOAD Jan 24 00:43:32.722563 kernel: audit: type=1334 audit(1769215412.682:404): prog-id=78 op=UNLOAD Jan 24 00:43:32.682000 audit: BPF prog-id=114 op=LOAD Jan 24 00:43:32.740313 kernel: audit: type=1334 audit(1769215412.682:405): prog-id=114 op=LOAD Jan 24 00:43:32.682000 audit: BPF prog-id=115 op=LOAD Jan 24 00:43:32.749403 kernel: audit: type=1334 audit(1769215412.682:406): prog-id=115 op=LOAD Jan 24 00:43:32.682000 audit: BPF prog-id=79 op=UNLOAD Jan 24 00:43:32.761272 kernel: audit: type=1334 audit(1769215412.682:407): prog-id=79 op=UNLOAD Jan 24 00:43:32.682000 audit: BPF prog-id=80 op=UNLOAD Jan 24 00:43:32.772387 kernel: audit: type=1334 audit(1769215412.682:408): prog-id=80 op=UNLOAD Jan 24 00:43:32.682000 audit: BPF prog-id=116 op=LOAD Jan 24 00:43:32.682000 audit: BPF prog-id=72 op=UNLOAD Jan 24 00:43:32.792067 kernel: audit: type=1334 audit(1769215412.682:409): prog-id=116 op=LOAD Jan 24 00:43:32.792241 kernel: audit: type=1334 audit(1769215412.682:410): prog-id=72 op=UNLOAD Jan 24 00:43:32.792283 kernel: audit: type=1334 audit(1769215412.682:411): prog-id=117 op=LOAD Jan 24 00:43:32.682000 audit: BPF prog-id=117 op=LOAD Jan 24 00:43:32.682000 audit: BPF prog-id=118 op=LOAD Jan 24 00:43:32.682000 audit: BPF prog-id=73 op=UNLOAD Jan 24 00:43:32.682000 audit: BPF prog-id=74 op=UNLOAD Jan 24 00:43:32.692000 audit: BPF prog-id=119 op=LOAD Jan 24 00:43:32.692000 audit: BPF prog-id=66 op=UNLOAD Jan 24 00:43:32.692000 audit: BPF prog-id=120 op=LOAD Jan 24 00:43:32.692000 audit: BPF prog-id=121 op=LOAD Jan 24 00:43:32.692000 audit: BPF prog-id=67 op=UNLOAD Jan 24 00:43:32.692000 audit: BPF prog-id=68 op=UNLOAD Jan 24 00:43:32.692000 audit: BPF prog-id=122 op=LOAD Jan 24 00:43:32.692000 audit: BPF prog-id=63 op=UNLOAD Jan 24 00:43:32.692000 audit: BPF prog-id=123 op=LOAD Jan 24 00:43:32.692000 audit: BPF prog-id=124 op=LOAD Jan 24 00:43:32.692000 audit: BPF prog-id=64 op=UNLOAD Jan 24 00:43:32.692000 audit: BPF prog-id=65 op=UNLOAD Jan 24 00:43:32.698000 audit: BPF prog-id=125 op=LOAD Jan 24 00:43:32.698000 audit: BPF prog-id=82 op=UNLOAD Jan 24 00:43:32.726000 audit: BPF prog-id=126 op=LOAD Jan 24 00:43:32.726000 audit: BPF prog-id=69 op=UNLOAD Jan 24 00:43:32.726000 audit: BPF prog-id=127 op=LOAD Jan 24 00:43:32.726000 audit: BPF prog-id=128 op=LOAD Jan 24 00:43:32.726000 audit: BPF prog-id=70 op=UNLOAD Jan 24 00:43:32.726000 audit: BPF prog-id=71 op=UNLOAD Jan 24 00:43:32.726000 audit: BPF prog-id=129 op=LOAD Jan 24 00:43:32.727000 audit: BPF prog-id=130 op=LOAD Jan 24 00:43:32.728000 audit: BPF prog-id=75 op=UNLOAD Jan 24 00:43:32.728000 audit: BPF prog-id=76 op=UNLOAD Jan 24 00:43:32.739000 audit: BPF prog-id=131 op=LOAD Jan 24 00:43:32.739000 audit: BPF prog-id=81 op=UNLOAD Jan 24 00:43:32.740000 audit: BPF prog-id=132 op=LOAD Jan 24 00:43:32.741000 
audit: BPF prog-id=77 op=UNLOAD Jan 24 00:43:33.213289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:43:33.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:43:33.254798 (kubelet)[2869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:43:33.469433 kubelet[2869]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:43:33.469433 kubelet[2869]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:43:33.469433 kubelet[2869]: I0124 00:43:33.468043 2869 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:43:33.504415 kubelet[2869]: I0124 00:43:33.502558 2869 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:43:33.504415 kubelet[2869]: I0124 00:43:33.502589 2869 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:43:33.504415 kubelet[2869]: I0124 00:43:33.502803 2869 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:43:33.504415 kubelet[2869]: I0124 00:43:33.502829 2869 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:43:33.506564 kubelet[2869]: I0124 00:43:33.505438 2869 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:43:33.524713 kubelet[2869]: I0124 00:43:33.524558 2869 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 24 00:43:33.531457 kubelet[2869]: I0124 00:43:33.531405 2869 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:43:33.549656 kubelet[2869]: I0124 00:43:33.549393 2869 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 00:43:33.575403 kubelet[2869]: I0124 00:43:33.575375 2869 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 24 00:43:33.575872 kubelet[2869]: I0124 00:43:33.575830 2869 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:43:33.576303 kubelet[2869]: I0124 00:43:33.576018 2869 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:43:33.576496 kubelet[2869]: I0124 00:43:33.576482 2869 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:43:33.576542 kubelet[2869]: I0124 00:43:33.576534 2869 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:43:33.576596 kubelet[2869]: I0124 00:43:33.576588 2869 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 24 00:43:33.580394 kubelet[2869]: I0124 00:43:33.580328 2869 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:43:33.580644 kubelet[2869]: I0124 00:43:33.580559 2869 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:43:33.580711 kubelet[2869]: I0124 00:43:33.580646 2869 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:43:33.580711 kubelet[2869]: I0124 00:43:33.580677 2869 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:43:33.580711 kubelet[2869]: I0124 00:43:33.580698 2869 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:43:33.592243 kubelet[2869]: I0124 00:43:33.591468 2869 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 00:43:33.592500 kubelet[2869]: I0124 00:43:33.592481 2869 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:43:33.593343 kubelet[2869]: I0124 00:43:33.593323 2869 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:43:33.604109 
kubelet[2869]: I0124 00:43:33.604089 2869 server.go:1262] "Started kubelet" Jan 24 00:43:33.605330 kubelet[2869]: I0124 00:43:33.604300 2869 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:43:33.605330 kubelet[2869]: I0124 00:43:33.605277 2869 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:43:33.605577 kubelet[2869]: I0124 00:43:33.605482 2869 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:43:33.615453 kubelet[2869]: I0124 00:43:33.615325 2869 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:43:33.617100 kubelet[2869]: I0124 00:43:33.616755 2869 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:43:33.617648 kubelet[2869]: I0124 00:43:33.617431 2869 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:43:33.621298 kubelet[2869]: I0124 00:43:33.618692 2869 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:43:33.627357 kubelet[2869]: I0124 00:43:33.627280 2869 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:43:33.627421 kubelet[2869]: I0124 00:43:33.627398 2869 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:43:33.627612 kubelet[2869]: I0124 00:43:33.627539 2869 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:43:33.629769 kubelet[2869]: I0124 00:43:33.629746 2869 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:43:33.630517 kubelet[2869]: I0124 00:43:33.630491 2869 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:43:33.632755 kubelet[2869]: I0124 00:43:33.632734 2869 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:43:33.633554 kubelet[2869]: E0124 00:43:33.632845 2869 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:43:33.710627 kubelet[2869]: I0124 00:43:33.710511 2869 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 00:43:33.735578 kubelet[2869]: I0124 00:43:33.734767 2869 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:43:33.735578 kubelet[2869]: I0124 00:43:33.734948 2869 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:43:33.738265 kubelet[2869]: I0124 00:43:33.737806 2869 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:43:33.738581 kubelet[2869]: E0124 00:43:33.738505 2869 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:43:33.777543 kubelet[2869]: I0124 00:43:33.777090 2869 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:43:33.777543 kubelet[2869]: I0124 00:43:33.777108 2869 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:43:33.777543 kubelet[2869]: I0124 00:43:33.777128 2869 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:43:33.777543 kubelet[2869]: I0124 00:43:33.777386 2869 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:43:33.778045 kubelet[2869]: I0124 00:43:33.777975 2869 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:43:33.778045 kubelet[2869]: I0124 00:43:33.778007 2869 policy_none.go:49] "None policy: Start" Jan 24 00:43:33.778045 kubelet[2869]: I0124 00:43:33.778020 2869 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:43:33.778045 kubelet[2869]: I0124 00:43:33.778034 2869 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:43:33.779501 kubelet[2869]: I0124 00:43:33.778421 2869 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 24 00:43:33.779501 kubelet[2869]: I0124 00:43:33.778434 2869 policy_none.go:47] "Start" Jan 24 00:43:33.801775 kubelet[2869]: E0124 00:43:33.801747 2869 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:43:33.805835 kubelet[2869]: I0124 00:43:33.804833 2869 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:43:33.806283 kubelet[2869]: I0124 00:43:33.806027 2869 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:43:33.810122 kubelet[2869]: I0124 00:43:33.808383 2869 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:43:33.824585 kubelet[2869]: E0124 00:43:33.822830 2869 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:43:33.844278 kubelet[2869]: I0124 00:43:33.843343 2869 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:33.844278 kubelet[2869]: I0124 00:43:33.843434 2869 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:43:33.847286 kubelet[2869]: I0124 00:43:33.847251 2869 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:43:33.886127 kubelet[2869]: E0124 00:43:33.885710 2869 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:33.888982 kubelet[2869]: E0124 00:43:33.888619 2869 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 00:43:33.933287 kubelet[2869]: I0124 00:43:33.932749 2869 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:43:33.964971 kubelet[2869]: I0124 00:43:33.964632 2869 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 24 00:43:33.964971 kubelet[2869]: I0124 00:43:33.964851 2869 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:43:34.031948 kubelet[2869]: I0124 00:43:34.031081 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b0676e97f4f113e7c6261f20a3f83dcd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b0676e97f4f113e7c6261f20a3f83dcd\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:43:34.031948 kubelet[2869]: I0124 00:43:34.031121 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b0676e97f4f113e7c6261f20a3f83dcd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b0676e97f4f113e7c6261f20a3f83dcd\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:43:34.032326 kubelet[2869]: I0124 00:43:34.032300 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b0676e97f4f113e7c6261f20a3f83dcd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b0676e97f4f113e7c6261f20a3f83dcd\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:43:34.032493 kubelet[2869]: I0124 00:43:34.032473 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:34.032592 kubelet[2869]: I0124 00:43:34.032569 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:34.032708 kubelet[2869]: I0124 00:43:34.032687 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:34.032801 kubelet[2869]: I0124 00:43:34.032781 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:43:34.032972 kubelet[2869]: I0124 00:43:34.032873 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:34.033056 kubelet[2869]: I0124 00:43:34.033039 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:43:34.163485 kubelet[2869]: E0124 00:43:34.163446 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:34.187485 kubelet[2869]: E0124 00:43:34.187018 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:34.191282 kubelet[2869]: E0124 00:43:34.190797 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:34.582351 kubelet[2869]: I0124 00:43:34.581772 2869 apiserver.go:52] "Watching apiserver" Jan 24 00:43:34.628695 kubelet[2869]: I0124 00:43:34.628581 2869 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:43:34.779020 kubelet[2869]: E0124 00:43:34.778732 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:34.779020 kubelet[2869]: E0124 00:43:34.778998 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:34.779355 kubelet[2869]: E0124 00:43:34.779343 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:34.786827 kubelet[2869]: I0124 00:43:34.786423 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.786403414 podStartE2EDuration="1.786403414s" podCreationTimestamp="2026-01-24 00:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-24 00:43:34.74597093 +0000 UTC m=+1.465271146" watchObservedRunningTime="2026-01-24 00:43:34.786403414 +0000 UTC m=+1.505703630" Jan 24 00:43:35.781555 kubelet[2869]: E0124 00:43:35.780087 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:35.781555 kubelet[2869]: E0124 00:43:35.780571 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:36.179694 kubelet[2869]: I0124 00:43:36.179658 2869 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:43:36.181050 containerd[1607]: time="2026-01-24T00:43:36.180784974Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:43:36.182060 kubelet[2869]: I0124 00:43:36.181968 2869 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:43:37.189602 systemd[1]: Created slice kubepods-besteffort-pode8f3a2b6_3311_4593_9584_26004d2dbae6.slice - libcontainer container kubepods-besteffort-pode8f3a2b6_3311_4593_9584_26004d2dbae6.slice. Jan 24 00:43:37.268640 kubelet[2869]: I0124 00:43:37.268430 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8f3a2b6-3311-4593-9584-26004d2dbae6-xtables-lock\") pod \"kube-proxy-lhkvj\" (UID: \"e8f3a2b6-3311-4593-9584-26004d2dbae6\") " pod="kube-system/kube-proxy-lhkvj" Jan 24 00:43:37.268640 kubelet[2869]: I0124 00:43:37.268483 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e8f3a2b6-3311-4593-9584-26004d2dbae6-kube-proxy\") pod \"kube-proxy-lhkvj\" (UID: \"e8f3a2b6-3311-4593-9584-26004d2dbae6\") " pod="kube-system/kube-proxy-lhkvj" Jan 24 00:43:37.268640 kubelet[2869]: I0124 00:43:37.268515 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8f3a2b6-3311-4593-9584-26004d2dbae6-lib-modules\") pod \"kube-proxy-lhkvj\" (UID: \"e8f3a2b6-3311-4593-9584-26004d2dbae6\") " pod="kube-system/kube-proxy-lhkvj" Jan 24 00:43:37.268640 kubelet[2869]: I0124 00:43:37.268541 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnwch\" (UniqueName: \"kubernetes.io/projected/e8f3a2b6-3311-4593-9584-26004d2dbae6-kube-api-access-nnwch\") pod \"kube-proxy-lhkvj\" (UID: \"e8f3a2b6-3311-4593-9584-26004d2dbae6\") " pod="kube-system/kube-proxy-lhkvj" Jan 24 00:43:37.444978 systemd[1]: Created slice kubepods-besteffort-pode6020076_0e53_410c_b5f8_018d509b81e2.slice - libcontainer container kubepods-besteffort-pode6020076_0e53_410c_b5f8_018d509b81e2.slice. 
Jan 24 00:43:37.471979 kubelet[2869]: I0124 00:43:37.471846 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvxtl\" (UniqueName: \"kubernetes.io/projected/e6020076-0e53-410c-b5f8-018d509b81e2-kube-api-access-cvxtl\") pod \"tigera-operator-65cdcdfd6d-bkbrk\" (UID: \"e6020076-0e53-410c-b5f8-018d509b81e2\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-bkbrk" Jan 24 00:43:37.472119 kubelet[2869]: I0124 00:43:37.471997 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e6020076-0e53-410c-b5f8-018d509b81e2-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-bkbrk\" (UID: \"e6020076-0e53-410c-b5f8-018d509b81e2\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-bkbrk" Jan 24 00:43:37.516477 kubelet[2869]: E0124 00:43:37.515549 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:37.523372 containerd[1607]: time="2026-01-24T00:43:37.523238857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lhkvj,Uid:e8f3a2b6-3311-4593-9584-26004d2dbae6,Namespace:kube-system,Attempt:0,}" Jan 24 00:43:37.700501 containerd[1607]: time="2026-01-24T00:43:37.699388507Z" level=info msg="connecting to shim 5b6a89be30bfa6ac55a38cfcbfc3e55a54633dc0d189360b86ed3f3287c79388" address="unix:///run/containerd/s/02fd97b5de452df0879290eeb0d6cf0cb9e82069b212e3e1b1d43868f03f9f4d" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:43:37.764364 containerd[1607]: time="2026-01-24T00:43:37.764095450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-bkbrk,Uid:e6020076-0e53-410c-b5f8-018d509b81e2,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:43:37.848263 containerd[1607]: time="2026-01-24T00:43:37.847739144Z" level=info msg="connecting to shim fa65a402430abf2d03d2a048de3ce7d931a2ab7c59cfbe2898f8a977debed67e" address="unix:///run/containerd/s/4b33616d312e9410ca3ca5ad975085d20f02a588b41676fdca4bc32440857ae4" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:43:37.878820 systemd[1]: Started cri-containerd-5b6a89be30bfa6ac55a38cfcbfc3e55a54633dc0d189360b86ed3f3287c79388.scope - libcontainer container 5b6a89be30bfa6ac55a38cfcbfc3e55a54633dc0d189360b86ed3f3287c79388. Jan 24 00:43:38.036736 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 24 00:43:38.039394 kernel: audit: type=1334 audit(1769215418.021:444): prog-id=133 op=LOAD Jan 24 00:43:38.021000 audit: BPF prog-id=133 op=LOAD Jan 24 00:43:38.061000 audit: BPF prog-id=134 op=LOAD Jan 24 00:43:38.063456 systemd[1]: Started cri-containerd-fa65a402430abf2d03d2a048de3ce7d931a2ab7c59cfbe2898f8a977debed67e.scope - libcontainer container fa65a402430abf2d03d2a048de3ce7d931a2ab7c59cfbe2898f8a977debed67e. 
Jan 24 00:43:38.061000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2933 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.097093 kernel: audit: type=1334 audit(1769215418.061:445): prog-id=134 op=LOAD Jan 24 00:43:38.099774 kernel: audit: type=1300 audit(1769215418.061:445): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2933 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.099886 kernel: audit: type=1327 audit(1769215418.061:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562366138396265333062666136616335356133386366636266633365 Jan 24 00:43:38.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562366138396265333062666136616335356133386366636266633365 Jan 24 00:43:38.130599 kernel: audit: type=1334 audit(1769215418.062:446): prog-id=134 op=UNLOAD Jan 24 00:43:38.062000 audit: BPF prog-id=134 op=UNLOAD Jan 24 00:43:38.062000 audit[2945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562366138396265333062666136616335356133386366636266633365 Jan 24 00:43:38.180650 kernel: audit: type=1300 audit(1769215418.062:446): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.187593 kernel: audit: type=1327 audit(1769215418.062:446): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562366138396265333062666136616335356133386366636266633365 Jan 24 00:43:38.187667 kernel: audit: type=1334 audit(1769215418.062:447): prog-id=135 op=LOAD Jan 24 00:43:38.188042 kernel: audit: type=1300 audit(1769215418.062:447): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2933 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.062000 audit: BPF prog-id=135 op=LOAD Jan 24 00:43:38.062000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2933 pid=2945 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.220027 kernel: audit: type=1327 audit(1769215418.062:447): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562366138396265333062666136616335356133386366636266633365 Jan 24 00:43:38.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562366138396265333062666136616335356133386366636266633365 Jan 24 00:43:38.063000 audit: BPF prog-id=136 op=LOAD Jan 24 00:43:38.063000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2933 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562366138396265333062666136616335356133386366636266633365 Jan 24 00:43:38.076000 audit: BPF prog-id=136 op=UNLOAD Jan 24 00:43:38.076000 audit[2945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562366138396265333062666136616335356133386366636266633365 Jan 24 00:43:38.076000 audit: BPF prog-id=135 op=UNLOAD Jan 24 00:43:38.076000 audit[2945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562366138396265333062666136616335356133386366636266633365 Jan 24 00:43:38.076000 audit: BPF prog-id=137 op=LOAD Jan 24 00:43:38.076000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2933 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562366138396265333062666136616335356133386366636266633365 Jan 24 00:43:38.349000 audit: BPF 
prog-id=138 op=LOAD Jan 24 00:43:38.351000 audit: BPF prog-id=139 op=LOAD Jan 24 00:43:38.351000 audit[2976]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2963 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661363561343032343330616266326430336432613034386465336365 Jan 24 00:43:38.351000 audit: BPF prog-id=139 op=UNLOAD Jan 24 00:43:38.351000 audit[2976]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2963 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661363561343032343330616266326430336432613034386465336365 Jan 24 00:43:38.353000 audit: BPF prog-id=140 op=LOAD Jan 24 00:43:38.353000 audit[2976]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2963 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661363561343032343330616266326430336432613034386465336365 Jan 24 00:43:38.353000 audit: BPF prog-id=141 op=LOAD Jan 24 00:43:38.353000 audit[2976]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2963 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661363561343032343330616266326430336432613034386465336365 Jan 24 00:43:38.353000 audit: BPF prog-id=141 op=UNLOAD Jan 24 00:43:38.353000 audit[2976]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2963 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661363561343032343330616266326430336432613034386465336365 Jan 24 00:43:38.353000 audit: BPF prog-id=140 op=UNLOAD Jan 24 00:43:38.353000 audit[2976]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2963 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661363561343032343330616266326430336432613034386465336365 Jan 24 00:43:38.353000 audit: BPF prog-id=142 op=LOAD Jan 24 00:43:38.353000 audit[2976]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2963 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:38.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661363561343032343330616266326430336432613034386465336365 Jan 24 00:43:38.457292 containerd[1607]: time="2026-01-24T00:43:38.456701499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lhkvj,Uid:e8f3a2b6-3311-4593-9584-26004d2dbae6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b6a89be30bfa6ac55a38cfcbfc3e55a54633dc0d189360b86ed3f3287c79388\"" Jan 24 00:43:38.490470 kubelet[2869]: E0124 00:43:38.488389 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:38.610730 containerd[1607]: time="2026-01-24T00:43:38.608663179Z" level=info msg="CreateContainer within sandbox \"5b6a89be30bfa6ac55a38cfcbfc3e55a54633dc0d189360b86ed3f3287c79388\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:43:38.685380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount549991936.mount: Deactivated successfully. Jan 24 00:43:38.707970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2651514817.mount: Deactivated successfully. 
Jan 24 00:43:38.710006 containerd[1607]: time="2026-01-24T00:43:38.709343146Z" level=info msg="Container dc5d20513980c56c1ac2c291424f49f7d5b7006f905e95c5965e834b5d211faa: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:43:38.769263 containerd[1607]: time="2026-01-24T00:43:38.768739349Z" level=info msg="CreateContainer within sandbox \"5b6a89be30bfa6ac55a38cfcbfc3e55a54633dc0d189360b86ed3f3287c79388\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dc5d20513980c56c1ac2c291424f49f7d5b7006f905e95c5965e834b5d211faa\"" Jan 24 00:43:38.772836 containerd[1607]: time="2026-01-24T00:43:38.772791346Z" level=info msg="StartContainer for \"dc5d20513980c56c1ac2c291424f49f7d5b7006f905e95c5965e834b5d211faa\"" Jan 24 00:43:38.778456 containerd[1607]: time="2026-01-24T00:43:38.778419950Z" level=info msg="connecting to shim dc5d20513980c56c1ac2c291424f49f7d5b7006f905e95c5965e834b5d211faa" address="unix:///run/containerd/s/02fd97b5de452df0879290eeb0d6cf0cb9e82069b212e3e1b1d43868f03f9f4d" protocol=ttrpc version=3 Jan 24 00:43:38.847465 containerd[1607]: time="2026-01-24T00:43:38.847066068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-bkbrk,Uid:e6020076-0e53-410c-b5f8-018d509b81e2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fa65a402430abf2d03d2a048de3ce7d931a2ab7c59cfbe2898f8a977debed67e\"" Jan 24 00:43:38.852864 containerd[1607]: time="2026-01-24T00:43:38.852536767Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:43:38.903493 systemd[1]: Started cri-containerd-dc5d20513980c56c1ac2c291424f49f7d5b7006f905e95c5965e834b5d211faa.scope - libcontainer container dc5d20513980c56c1ac2c291424f49f7d5b7006f905e95c5965e834b5d211faa. Jan 24 00:43:39.136000 audit: BPF prog-id=143 op=LOAD Jan 24 00:43:39.136000 audit[3015]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2933 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:39.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463356432303531333938306335366331616332633239313432346634 Jan 24 00:43:39.136000 audit: BPF prog-id=144 op=LOAD Jan 24 00:43:39.136000 audit[3015]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2933 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:39.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463356432303531333938306335366331616332633239313432346634 Jan 24 00:43:39.136000 audit: BPF prog-id=144 op=UNLOAD Jan 24 00:43:39.136000 audit[3015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:39.136000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463356432303531333938306335366331616332633239313432346634 Jan 24 00:43:39.136000 audit: BPF prog-id=143 op=UNLOAD Jan 24 00:43:39.136000 audit[3015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:39.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463356432303531333938306335366331616332633239313432346634 Jan 24 00:43:39.136000 audit: BPF prog-id=145 op=LOAD Jan 24 00:43:39.136000 audit[3015]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2933 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:39.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463356432303531333938306335366331616332633239313432346634 Jan 24 00:43:39.301763 containerd[1607]: time="2026-01-24T00:43:39.301659172Z" level=info msg="StartContainer for \"dc5d20513980c56c1ac2c291424f49f7d5b7006f905e95c5965e834b5d211faa\" returns successfully" Jan 24 00:43:39.892304 kubelet[2869]: E0124 00:43:39.889991 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:40.774000 audit[3084]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:40.774000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff96d50d10 a2=0 a3=7fff96d50cfc items=0 ppid=3027 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:40.774000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 00:43:40.778000 audit[3085]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:40.778000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1264a480 a2=0 a3=7ffd1264a46c items=0 ppid=3027 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:40.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 00:43:40.844000 audit[3088]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Jan 24 00:43:40.844000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe25eadee0 a2=0 a3=7ffe25eadecc items=0 ppid=3027 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:40.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 00:43:40.873000 audit[3092]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:40.873000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3593c5d0 a2=0 a3=7ffc3593c5bc items=0 ppid=3027 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:40.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 00:43:40.874000 audit[3091]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:40.874000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2bc81360 a2=0 a3=7ffe2bc8134c items=0 ppid=3027 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:40.874000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 00:43:40.885000 audit[3093]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:40.885000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff17651ff0 a2=0 a3=7fff17651fdc items=0 ppid=3027 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:40.885000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 00:43:40.910356 kubelet[2869]: E0124 00:43:40.909457 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:40.962288 kubelet[2869]: E0124 00:43:40.961465 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:41.013000 audit[3094]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.013000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdabd21000 a2=0 a3=7ffdabd20fec items=0 ppid=3027 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.013000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 00:43:41.057704 kubelet[2869]: E0124 00:43:41.044061 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:41.115404 kubelet[2869]: I0124 00:43:41.114880 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lhkvj" podStartSLOduration=4.114856409 podStartE2EDuration="4.114856409s" podCreationTimestamp="2026-01-24 00:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:43:40.010871462 +0000 UTC m=+6.730171678" watchObservedRunningTime="2026-01-24 00:43:41.114856409 +0000 UTC m=+7.834156625" Jan 24 00:43:41.143000 audit[3096]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.143000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe6e188710 a2=0 a3=7ffe6e1886fc items=0 ppid=3027 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 24 00:43:41.187000 audit[3099]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.187000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe2e864230 a2=0 a3=7ffe2e86421c items=0 ppid=3027 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 24 00:43:41.225000 audit[3100]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.225000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeab4e0570 a2=0 a3=7ffeab4e055c items=0 ppid=3027 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.225000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 00:43:41.266000 audit[3102]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.266000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc369a3930 a2=0 a3=7ffc369a391c items=0 ppid=3027 pid=3102 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 00:43:41.294456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535159872.mount: Deactivated successfully. Jan 24 00:43:41.304000 audit[3103]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.304000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff03cf8140 a2=0 a3=7fff03cf812c items=0 ppid=3027 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.304000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 00:43:41.350000 audit[3105]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.350000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffce5b23600 a2=0 a3=7ffce5b235ec items=0 ppid=3027 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:43:41.388000 audit[3112]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.388000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe61027130 a2=0 a3=7ffe6102711c items=0 ppid=3027 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.388000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:43:41.395000 audit[3113]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.395000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4603c960 a2=0 a3=7fff4603c94c items=0 ppid=3027 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.395000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 00:43:41.412000 audit[3115]: NETFILTER_CFG 
table=filter:69 family=2 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.412000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc3c32d2e0 a2=0 a3=7ffc3c32d2cc items=0 ppid=3027 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.412000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 00:43:41.419000 audit[3116]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.419000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd2d1e3b0 a2=0 a3=7ffcd2d1e39c items=0 ppid=3027 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.419000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 00:43:41.446000 audit[3118]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.446000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc76a9430 a2=0 a3=7ffcc76a941c items=0 ppid=3027 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.446000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 24 00:43:41.516000 audit[3121]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.516000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffafb41360 a2=0 a3=7fffafb4134c items=0 ppid=3027 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 24 00:43:41.577000 audit[3124]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.577000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff1e8eeb80 a2=0 a3=7fff1e8eeb6c items=0 ppid=3027 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 24 00:43:41.577000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 24 00:43:41.585000 audit[3125]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.585000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd5ab63830 a2=0 a3=7ffd5ab6381c items=0 ppid=3027 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 00:43:41.611000 audit[3127]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.611000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffeadf0bc30 a2=0 a3=7ffeadf0bc1c items=0 ppid=3027 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.611000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:43:41.654000 audit[3130]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.654000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc5f089a60 a2=0 a3=7ffc5f089a4c items=0 ppid=3027 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:43:41.661000 audit[3131]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.661000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffde4f4c40 a2=0 a3=7fffde4f4c2c items=0 ppid=3027 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.661000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 00:43:41.677000 audit[3133]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:43:41.677000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc9f791c40 a2=0 a3=7ffc9f791c2c items=0 ppid=3027 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.677000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 00:43:41.779000 audit[3139]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:43:41.779000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc507a8ee0 a2=0 a3=7ffc507a8ecc items=0 ppid=3027 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.779000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:43:41.806000 audit[3139]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:43:41.806000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc507a8ee0 a2=0 a3=7ffc507a8ecc items=0 ppid=3027 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.806000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:43:41.817000 audit[3144]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:41.817000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd5e878f50 a2=0 a3=7ffd5e878f3c items=0 ppid=3027 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 00:43:41.844000 audit[3146]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:41.844000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff8820ee20 a2=0 a3=7fff8820ee0c items=0 ppid=3027 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.844000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 24 00:43:41.865000 audit[3149]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:41.865000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff69468d60 a2=0 a3=7fff69468d4c items=0 ppid=3027 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 24 00:43:41.874000 audit[3150]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:41.874000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff483802b0 a2=0 a3=7fff4838029c items=0 ppid=3027 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.874000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 00:43:41.883000 audit[3152]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:41.883000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc5c8b2a00 a2=0 a3=7ffc5c8b29ec items=0 ppid=3027 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.883000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 00:43:41.889000 audit[3153]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:41.889000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea6399c70 a2=0 a3=7ffea6399c5c items=0 ppid=3027 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.889000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 00:43:41.907000 audit[3155]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:41.907000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffe9d98440 a2=0 a3=7fffe9d9842c items=0 ppid=3027 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.907000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:43:41.944000 audit[3158]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 24 00:43:41.944000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd98a3d520 a2=0 a3=7ffd98a3d50c items=0 ppid=3027 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.944000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:43:41.960000 audit[3159]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:41.960000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed07795e0 a2=0 a3=7ffed07795cc items=0 ppid=3027 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 00:43:41.993000 audit[3161]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:41.993000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd07901820 a2=0 a3=7ffd0790180c items=0 ppid=3027 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:41.993000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 00:43:42.000000 audit[3162]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:42.000000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff8ddac30 a2=0 a3=7ffff8ddac1c items=0 ppid=3027 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:42.000000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 00:43:42.018000 audit[3164]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:42.018000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5ebb8e00 a2=0 a3=7ffc5ebb8dec items=0 ppid=3027 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:42.018000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 24 00:43:42.082000 audit[3167]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:42.082000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff6290f370 a2=0 a3=7fff6290f35c items=0 ppid=3027 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:42.082000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 24 00:43:42.124000 audit[3170]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:42.124000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe848b3ff0 a2=0 a3=7ffe848b3fdc items=0 ppid=3027 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:42.124000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 24 00:43:42.147000 audit[3171]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:42.147000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffee7c79080 a2=0 a3=7ffee7c7906c items=0 ppid=3027 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:42.147000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 00:43:42.164000 audit[3173]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:42.164000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe5d8f6980 a2=0 a3=7ffe5d8f696c items=0 ppid=3027 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:42.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:43:42.286000 audit[3176]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:42.286000 audit[3176]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe3595fc80 a2=0 a3=7ffe3595fc6c items=0 ppid=3027 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:42.286000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:43:42.296000 audit[3177]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:42.296000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4c802700 a2=0 a3=7ffd4c8026ec items=0 ppid=3027 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:42.296000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 00:43:42.356000 audit[3179]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:42.356000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff46033a30 a2=0 a3=7fff46033a1c items=0 ppid=3027 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:42.356000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 00:43:42.389000 audit[3180]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:42.389000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf1baaa50 a2=0 a3=7ffcf1baaa3c items=0 ppid=3027 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:42.389000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 00:43:42.419000 audit[3182]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:42.419000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd3c3597c0 a2=0 a3=7ffd3c3597ac items=0 ppid=3027 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:42.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:43:42.667000 audit[3185]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:43:42.667000 audit[3185]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=228 a0=3 a1=7ffff9c2d520 a2=0 a3=7ffff9c2d50c items=0 ppid=3027 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:42.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:43:42.906424 kubelet[2869]: E0124 00:43:42.904254 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:42.961000 audit[3187]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 00:43:42.961000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffff2847c60 a2=0 a3=7ffff2847c4c items=0 ppid=3027 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:42.961000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:43:42.985000 audit[3187]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 00:43:42.985000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffff2847c60 a2=0 a3=7ffff2847c4c items=0 ppid=3027 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:42.985000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:43:43.155337 kubelet[2869]: E0124 00:43:43.149834 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:43.218884 kubelet[2869]: E0124 00:43:43.217347 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:44.074366 kubelet[2869]: E0124 00:43:44.074076 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:45.309358 kubelet[2869]: E0124 00:43:45.307383 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:51.177592 containerd[1607]: time="2026-01-24T00:43:51.176474216Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:51.188436 containerd[1607]: time="2026-01-24T00:43:51.187874866Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 24 00:43:51.193552 containerd[1607]: time="2026-01-24T00:43:51.191894724Z" level=info msg="ImageCreate event 
name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:51.199768 containerd[1607]: time="2026-01-24T00:43:51.198987513Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:51.201813 containerd[1607]: time="2026-01-24T00:43:51.201479127Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 12.348851389s" Jan 24 00:43:51.201813 containerd[1607]: time="2026-01-24T00:43:51.201521516Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:43:51.230604 containerd[1607]: time="2026-01-24T00:43:51.230555023Z" level=info msg="CreateContainer within sandbox \"fa65a402430abf2d03d2a048de3ce7d931a2ab7c59cfbe2898f8a977debed67e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:43:51.315050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2022976276.mount: Deactivated successfully. Jan 24 00:43:51.326472 containerd[1607]: time="2026-01-24T00:43:51.326359386Z" level=info msg="Container 4d3e0fdfcf27fc1240e7ff02661418117c1abeee4f6820337c2bd13f8ab76376: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:43:51.354100 containerd[1607]: time="2026-01-24T00:43:51.352049472Z" level=info msg="CreateContainer within sandbox \"fa65a402430abf2d03d2a048de3ce7d931a2ab7c59cfbe2898f8a977debed67e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4d3e0fdfcf27fc1240e7ff02661418117c1abeee4f6820337c2bd13f8ab76376\"" Jan 24 00:43:51.357631 containerd[1607]: time="2026-01-24T00:43:51.355090198Z" level=info msg="StartContainer for \"4d3e0fdfcf27fc1240e7ff02661418117c1abeee4f6820337c2bd13f8ab76376\"" Jan 24 00:43:51.366718 containerd[1607]: time="2026-01-24T00:43:51.366485698Z" level=info msg="connecting to shim 4d3e0fdfcf27fc1240e7ff02661418117c1abeee4f6820337c2bd13f8ab76376" address="unix:///run/containerd/s/4b33616d312e9410ca3ca5ad975085d20f02a588b41676fdca4bc32440857ae4" protocol=ttrpc version=3 Jan 24 00:43:51.500033 systemd[1]: Started cri-containerd-4d3e0fdfcf27fc1240e7ff02661418117c1abeee4f6820337c2bd13f8ab76376.scope - libcontainer container 4d3e0fdfcf27fc1240e7ff02661418117c1abeee4f6820337c2bd13f8ab76376. 
Jan 24 00:43:51.601000 audit: BPF prog-id=146 op=LOAD Jan 24 00:43:51.609286 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 24 00:43:51.609412 kernel: audit: type=1334 audit(1769215431.601:516): prog-id=146 op=LOAD Jan 24 00:43:51.604000 audit: BPF prog-id=147 op=LOAD Jan 24 00:43:51.637319 kernel: audit: type=1334 audit(1769215431.604:517): prog-id=147 op=LOAD Jan 24 00:43:51.604000 audit[3189]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2963 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:51.665426 kernel: audit: type=1300 audit(1769215431.604:517): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2963 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:51.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464336530666466636632376663313234306537666630323636313431 Jan 24 00:43:51.698415 kernel: audit: type=1327 audit(1769215431.604:517): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464336530666466636632376663313234306537666630323636313431 Jan 24 00:43:51.698584 kernel: audit: type=1334 audit(1769215431.604:518): prog-id=147 op=UNLOAD Jan 24 00:43:51.604000 audit: BPF prog-id=147 op=UNLOAD Jan 24 00:43:51.705555 kernel: audit: type=1300 audit(1769215431.604:518): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2963 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:51.604000 audit[3189]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2963 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:51.778771 kernel: audit: type=1327 audit(1769215431.604:518): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464336530666466636632376663313234306537666630323636313431 Jan 24 00:43:51.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464336530666466636632376663313234306537666630323636313431 Jan 24 00:43:51.604000 audit: BPF prog-id=148 op=LOAD Jan 24 00:43:51.790075 kernel: audit: type=1334 audit(1769215431.604:519): prog-id=148 op=LOAD Jan 24 00:43:51.791555 kernel: audit: type=1300 audit(1769215431.604:519): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2963 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:51.604000 audit[3189]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2963 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:51.822293 containerd[1607]: time="2026-01-24T00:43:51.819860834Z" level=info msg="StartContainer for \"4d3e0fdfcf27fc1240e7ff02661418117c1abeee4f6820337c2bd13f8ab76376\" returns successfully" Jan 24 00:43:51.847442 kernel: audit: type=1327 audit(1769215431.604:519): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464336530666466636632376663313234306537666630323636313431 Jan 24 00:43:51.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464336530666466636632376663313234306537666630323636313431 Jan 24 00:43:51.604000 audit: BPF prog-id=149 op=LOAD Jan 24 00:43:51.604000 audit[3189]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2963 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:51.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464336530666466636632376663313234306537666630323636313431 Jan 24 00:43:51.604000 audit: BPF prog-id=149 op=UNLOAD Jan 24 00:43:51.604000 audit[3189]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2963 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:51.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464336530666466636632376663313234306537666630323636313431 Jan 24 00:43:51.604000 audit: BPF prog-id=148 op=UNLOAD Jan 24 00:43:51.604000 audit[3189]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2963 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:51.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464336530666466636632376663313234306537666630323636313431 Jan 24 00:43:51.604000 audit: BPF prog-id=150 op=LOAD Jan 24 00:43:51.604000 audit[3189]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2963 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:43:51.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464336530666466636632376663313234306537666630323636313431 Jan 24 00:43:52.840660 kubelet[2869]: I0124 00:43:52.840440 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-bkbrk" podStartSLOduration=3.482711607 podStartE2EDuration="15.840420493s" podCreationTimestamp="2026-01-24 00:43:37 +0000 UTC" firstStartedPulling="2026-01-24 00:43:38.849720632 +0000 UTC m=+5.569020858" lastFinishedPulling="2026-01-24 00:43:51.207429528 +0000 UTC m=+17.926729744" observedRunningTime="2026-01-24 00:43:52.839319516 +0000 UTC m=+19.558619762" watchObservedRunningTime="2026-01-24 00:43:52.840420493 +0000 UTC m=+19.559720720" Jan 24 00:44:00.876372 sudo[1815]: pam_unix(sudo:session): session closed for user root Jan 24 00:44:00.900727 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 24 00:44:00.900815 kernel: audit: type=1106 audit(1769215440.875:524): pid=1815 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:44:00.875000 audit[1815]: USER_END pid=1815 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:44:00.906633 kernel: audit: type=1104 audit(1769215440.875:525): pid=1815 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:44:00.875000 audit[1815]: CRED_DISP pid=1815 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:44:00.920079 sshd[1814]: Connection closed by 10.0.0.1 port 51888 Jan 24 00:44:00.922504 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:00.931026 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:51888.service: Deactivated successfully. Jan 24 00:44:00.938463 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:44:00.939119 systemd[1]: session-8.scope: Consumed 11.106s CPU time, 223.2M memory peak. Jan 24 00:44:00.944867 systemd-logind[1585]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:44:00.947700 systemd-logind[1585]: Removed session 8. 
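The kubelet pod_startup_latency_tracker entry above reports both an end-to-end startup duration and an SLO duration that excludes image-pull time. A minimal sketch reproducing those numbers from the timestamps in the tigera-operator entry (Python; the subtraction used here is an assumption about how kubelet derives the SLO figure, not something stated in this log):

from datetime import datetime

# Timestamps as printed above, e.g. "2026-01-24 00:43:51.207429528 +0000 UTC".
# Python's %f only accepts microseconds, so the fraction is truncated first.
def ts(value: str) -> datetime:
    date, clock = value.split(" ")[:2]
    if "." in clock:
        clock = clock[: clock.index(".") + 7]
        return datetime.strptime(f"{date} {clock}", "%Y-%m-%d %H:%M:%S.%f")
    return datetime.strptime(f"{date} {clock}", "%Y-%m-%d %H:%M:%S")

created    = ts("2026-01-24 00:43:37 +0000 UTC")            # podCreationTimestamp
pull_start = ts("2026-01-24 00:43:38.849720632 +0000 UTC")  # firstStartedPulling
pull_end   = ts("2026-01-24 00:43:51.207429528 +0000 UTC")  # lastFinishedPulling
running    = ts("2026-01-24 00:43:52.840420493 +0000 UTC")  # watchObservedRunningTime

e2e = (running - created).total_seconds()
slo = e2e - (pull_end - pull_start).total_seconds()         # assumed: E2E minus pull time
print(f"podStartE2EDuration ~ {e2e:.3f}s  podStartSLOduration ~ {slo:.3f}s")
# ~15.840s and ~3.483s, consistent with the kubelet entry above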
Jan 24 00:44:00.921000 audit[1810]: USER_END pid=1810 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:44:00.988563 kernel: audit: type=1106 audit(1769215440.921:526): pid=1810 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:44:00.921000 audit[1810]: CRED_DISP pid=1810 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:44:01.068121 kernel: audit: type=1104 audit(1769215440.921:527): pid=1810 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:44:01.069122 kernel: audit: type=1131 audit(1769215440.926:528): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.71:22-10.0.0.1:51888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:44:00.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.71:22-10.0.0.1:51888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:44:02.181000 audit[3284]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:02.226276 kernel: audit: type=1325 audit(1769215442.181:529): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:02.181000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffffe10c0d0 a2=0 a3=7ffffe10c0bc items=0 ppid=3027 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:02.281349 kernel: audit: type=1300 audit(1769215442.181:529): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffffe10c0d0 a2=0 a3=7ffffe10c0bc items=0 ppid=3027 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:02.181000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:02.248000 audit[3284]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:02.314869 kernel: audit: type=1327 audit(1769215442.181:529): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:02.315131 kernel: audit: type=1325 audit(1769215442.248:530): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:02.315319 kernel: audit: type=1300 audit(1769215442.248:530): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffffe10c0d0 a2=0 a3=0 items=0 ppid=3027 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:02.248000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffffe10c0d0 a2=0 a3=0 items=0 ppid=3027 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:02.248000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:03.340000 audit[3286]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:03.340000 audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeb8347090 a2=0 a3=7ffeb834707c items=0 ppid=3027 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:03.340000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:03.352000 audit[3286]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:03.352000 
audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb8347090 a2=0 a3=0 items=0 ppid=3027 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:03.352000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:08.165000 audit[3290]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:08.172759 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 24 00:44:08.172888 kernel: audit: type=1325 audit(1769215448.165:533): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:08.165000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe290cfd20 a2=0 a3=7ffe290cfd0c items=0 ppid=3027 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:08.218556 kernel: audit: type=1300 audit(1769215448.165:533): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe290cfd20 a2=0 a3=7ffe290cfd0c items=0 ppid=3027 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:08.218660 kernel: audit: type=1327 audit(1769215448.165:533): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:08.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:08.219000 audit[3290]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:08.247870 kernel: audit: type=1325 audit(1769215448.219:534): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:08.248043 kernel: audit: type=1300 audit(1769215448.219:534): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe290cfd20 a2=0 a3=0 items=0 ppid=3027 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:08.219000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe290cfd20 a2=0 a3=0 items=0 ppid=3027 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:08.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:08.294118 kernel: audit: type=1327 audit(1769215448.219:534): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:09.329000 audit[3292]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 
24 00:44:09.349401 kernel: audit: type=1325 audit(1769215449.329:535): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:09.349489 kernel: audit: type=1300 audit(1769215449.329:535): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffe0765f10 a2=0 a3=7fffe0765efc items=0 ppid=3027 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:09.329000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffe0765f10 a2=0 a3=7fffe0765efc items=0 ppid=3027 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:09.329000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:09.400835 kernel: audit: type=1327 audit(1769215449.329:535): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:09.403000 audit[3292]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:09.403000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffe0765f10 a2=0 a3=0 items=0 ppid=3027 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:09.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:09.423383 kernel: audit: type=1325 audit(1769215449.403:536): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:14.481000 audit[3296]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:14.495479 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 24 00:44:14.496398 kernel: audit: type=1325 audit(1769215454.481:537): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:14.481000 audit[3296]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff38f2fb10 a2=0 a3=7fff38f2fafc items=0 ppid=3027 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:14.594332 kernel: audit: type=1300 audit(1769215454.481:537): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff38f2fb10 a2=0 a3=7fff38f2fafc items=0 ppid=3027 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:14.481000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:14.617947 systemd[1]: Created slice 
kubepods-besteffort-pod113fd4c2_97de_4464_805f_1a320c120972.slice - libcontainer container kubepods-besteffort-pod113fd4c2_97de_4464_805f_1a320c120972.slice. Jan 24 00:44:14.630415 kernel: audit: type=1327 audit(1769215454.481:537): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:14.630476 kernel: audit: type=1325 audit(1769215454.529:538): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:14.529000 audit[3296]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:14.529000 audit[3296]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff38f2fb10 a2=0 a3=0 items=0 ppid=3027 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:14.664682 kubelet[2869]: I0124 00:44:14.664563 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/113fd4c2-97de-4464-805f-1a320c120972-tigera-ca-bundle\") pod \"calico-typha-79b59bb85b-5gnqv\" (UID: \"113fd4c2-97de-4464-805f-1a320c120972\") " pod="calico-system/calico-typha-79b59bb85b-5gnqv" Jan 24 00:44:14.664682 kubelet[2869]: I0124 00:44:14.664622 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/113fd4c2-97de-4464-805f-1a320c120972-typha-certs\") pod \"calico-typha-79b59bb85b-5gnqv\" (UID: \"113fd4c2-97de-4464-805f-1a320c120972\") " pod="calico-system/calico-typha-79b59bb85b-5gnqv" Jan 24 00:44:14.664682 kubelet[2869]: I0124 00:44:14.664651 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn8sw\" (UniqueName: \"kubernetes.io/projected/113fd4c2-97de-4464-805f-1a320c120972-kube-api-access-pn8sw\") pod \"calico-typha-79b59bb85b-5gnqv\" (UID: \"113fd4c2-97de-4464-805f-1a320c120972\") " pod="calico-system/calico-typha-79b59bb85b-5gnqv" Jan 24 00:44:14.529000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:14.737289 kernel: audit: type=1300 audit(1769215454.529:538): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff38f2fb10 a2=0 a3=0 items=0 ppid=3027 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:14.737618 kernel: audit: type=1327 audit(1769215454.529:538): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:14.961816 kubelet[2869]: E0124 00:44:14.961769 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:14.977727 containerd[1607]: time="2026-01-24T00:44:14.975862233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79b59bb85b-5gnqv,Uid:113fd4c2-97de-4464-805f-1a320c120972,Namespace:calico-system,Attempt:0,}" Jan 24 00:44:15.128072 containerd[1607]: time="2026-01-24T00:44:15.126594577Z" level=info 
msg="connecting to shim 50e8e2162722e7433f9f61f5a14d5ed4944fc87fa7be8a3f8a81cf8942de58e2" address="unix:///run/containerd/s/75a2f5c82b3357aa7949c2aa95ca5d983a86c36f321cc6b1e38d869688c33188" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:44:15.317442 systemd[1]: Created slice kubepods-besteffort-podd25e202b_3a55_4d84_9e1c_c0332b09007c.slice - libcontainer container kubepods-besteffort-podd25e202b_3a55_4d84_9e1c_c0332b09007c.slice. Jan 24 00:44:15.378600 kubelet[2869]: I0124 00:44:15.378554 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d25e202b-3a55-4d84-9e1c-c0332b09007c-lib-modules\") pod \"calico-node-lvg2n\" (UID: \"d25e202b-3a55-4d84-9e1c-c0332b09007c\") " pod="calico-system/calico-node-lvg2n" Jan 24 00:44:15.380363 kubelet[2869]: I0124 00:44:15.380333 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d25e202b-3a55-4d84-9e1c-c0332b09007c-node-certs\") pod \"calico-node-lvg2n\" (UID: \"d25e202b-3a55-4d84-9e1c-c0332b09007c\") " pod="calico-system/calico-node-lvg2n" Jan 24 00:44:15.380498 kubelet[2869]: I0124 00:44:15.380482 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d25e202b-3a55-4d84-9e1c-c0332b09007c-policysync\") pod \"calico-node-lvg2n\" (UID: \"d25e202b-3a55-4d84-9e1c-c0332b09007c\") " pod="calico-system/calico-node-lvg2n" Jan 24 00:44:15.380573 kubelet[2869]: I0124 00:44:15.380560 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d25e202b-3a55-4d84-9e1c-c0332b09007c-xtables-lock\") pod \"calico-node-lvg2n\" (UID: \"d25e202b-3a55-4d84-9e1c-c0332b09007c\") " pod="calico-system/calico-node-lvg2n" Jan 24 00:44:15.380659 kubelet[2869]: I0124 00:44:15.380644 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d25e202b-3a55-4d84-9e1c-c0332b09007c-cni-log-dir\") pod \"calico-node-lvg2n\" (UID: \"d25e202b-3a55-4d84-9e1c-c0332b09007c\") " pod="calico-system/calico-node-lvg2n" Jan 24 00:44:15.381054 kubelet[2869]: I0124 00:44:15.380758 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d25e202b-3a55-4d84-9e1c-c0332b09007c-cni-bin-dir\") pod \"calico-node-lvg2n\" (UID: \"d25e202b-3a55-4d84-9e1c-c0332b09007c\") " pod="calico-system/calico-node-lvg2n" Jan 24 00:44:15.381054 kubelet[2869]: I0124 00:44:15.380783 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d25e202b-3a55-4d84-9e1c-c0332b09007c-cni-net-dir\") pod \"calico-node-lvg2n\" (UID: \"d25e202b-3a55-4d84-9e1c-c0332b09007c\") " pod="calico-system/calico-node-lvg2n" Jan 24 00:44:15.381054 kubelet[2869]: I0124 00:44:15.380803 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d25e202b-3a55-4d84-9e1c-c0332b09007c-var-lib-calico\") pod \"calico-node-lvg2n\" (UID: \"d25e202b-3a55-4d84-9e1c-c0332b09007c\") " pod="calico-system/calico-node-lvg2n" Jan 24 00:44:15.381054 kubelet[2869]: I0124 00:44:15.380831 2869 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d25e202b-3a55-4d84-9e1c-c0332b09007c-tigera-ca-bundle\") pod \"calico-node-lvg2n\" (UID: \"d25e202b-3a55-4d84-9e1c-c0332b09007c\") " pod="calico-system/calico-node-lvg2n" Jan 24 00:44:15.381054 kubelet[2869]: I0124 00:44:15.380853 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8pm7\" (UniqueName: \"kubernetes.io/projected/d25e202b-3a55-4d84-9e1c-c0332b09007c-kube-api-access-q8pm7\") pod \"calico-node-lvg2n\" (UID: \"d25e202b-3a55-4d84-9e1c-c0332b09007c\") " pod="calico-system/calico-node-lvg2n" Jan 24 00:44:15.381383 kubelet[2869]: I0124 00:44:15.380875 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d25e202b-3a55-4d84-9e1c-c0332b09007c-flexvol-driver-host\") pod \"calico-node-lvg2n\" (UID: \"d25e202b-3a55-4d84-9e1c-c0332b09007c\") " pod="calico-system/calico-node-lvg2n" Jan 24 00:44:15.381383 kubelet[2869]: I0124 00:44:15.380912 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d25e202b-3a55-4d84-9e1c-c0332b09007c-var-run-calico\") pod \"calico-node-lvg2n\" (UID: \"d25e202b-3a55-4d84-9e1c-c0332b09007c\") " pod="calico-system/calico-node-lvg2n" Jan 24 00:44:15.465867 systemd[1]: Started cri-containerd-50e8e2162722e7433f9f61f5a14d5ed4944fc87fa7be8a3f8a81cf8942de58e2.scope - libcontainer container 50e8e2162722e7433f9f61f5a14d5ed4944fc87fa7be8a3f8a81cf8942de58e2. Jan 24 00:44:15.507279 kubelet[2869]: E0124 00:44:15.506903 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:15.507279 kubelet[2869]: W0124 00:44:15.507072 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:15.507279 kubelet[2869]: E0124 00:44:15.507099 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:15.511345 kubelet[2869]: E0124 00:44:15.510454 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:15.511345 kubelet[2869]: W0124 00:44:15.510473 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:15.511345 kubelet[2869]: E0124 00:44:15.510490 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
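The two "Created slice" entries above follow kubelet's systemd cgroup naming for BestEffort pods: the pod UID has its dashes swapped for underscores and is wrapped in a kubepods-besteffort-pod<uid>.slice unit. A minimal sketch that reproduces the names seen in this log (the helper name is mine, not kubelet's):

```python
# Reproduces the systemd slice names logged above for BestEffort pods.
# Assumption: the helper name is illustrative; the naming rule is inferred from
# the two "Created slice kubepods-besteffort-pod....slice" entries in this log.
def besteffort_slice_name(pod_uid: str) -> str:
    # kubelet replaces '-' with '_' in the pod UID and nests the unit under
    # the kubepods-besteffort slice.
    return "kubepods-besteffort-pod" + pod_uid.replace("-", "_") + ".slice"

print(besteffort_slice_name("113fd4c2-97de-4464-805f-1a320c120972"))
# -> kubepods-besteffort-pod113fd4c2_97de_4464_805f_1a320c120972.slice
```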
Error: unexpected end of JSON input" Jan 24 00:44:15.537349 kubelet[2869]: E0124 00:44:15.536518 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:15.537349 kubelet[2869]: W0124 00:44:15.536542 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:15.537349 kubelet[2869]: E0124 00:44:15.536565 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:15.562351 kubelet[2869]: E0124 00:44:15.562108 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:44:15.657453 kubelet[2869]: E0124 00:44:15.614892 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:15.660103 kubelet[2869]: W0124 00:44:15.659596 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:15.667646 kubelet[2869]: E0124 00:44:15.663276 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.010363 kubelet[2869]: E0124 00:44:16.002360 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.010363 kubelet[2869]: W0124 00:44:16.002387 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.010363 kubelet[2869]: E0124 00:44:16.002558 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.012547 kubelet[2869]: E0124 00:44:16.011701 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.012547 kubelet[2869]: W0124 00:44:16.011774 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.012547 kubelet[2869]: E0124 00:44:16.011795 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:16.013620 kubelet[2869]: E0124 00:44:16.013358 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.013620 kubelet[2869]: W0124 00:44:16.013446 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.013620 kubelet[2869]: E0124 00:44:16.013467 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.028540 kubelet[2869]: E0124 00:44:16.026921 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.028540 kubelet[2869]: W0124 00:44:16.027114 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.028540 kubelet[2869]: E0124 00:44:16.027371 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.051607 kubelet[2869]: E0124 00:44:16.031754 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.051607 kubelet[2869]: W0124 00:44:16.031874 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.051607 kubelet[2869]: E0124 00:44:16.031893 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.064914 kubelet[2869]: E0124 00:44:16.064699 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.064914 kubelet[2869]: W0124 00:44:16.064791 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.064914 kubelet[2869]: E0124 00:44:16.064881 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.066543 kubelet[2869]: E0124 00:44:16.065363 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.066543 kubelet[2869]: W0124 00:44:16.065375 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.066543 kubelet[2869]: E0124 00:44:16.065390 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:16.066543 kubelet[2869]: E0124 00:44:16.065630 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.066543 kubelet[2869]: W0124 00:44:16.065641 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.066543 kubelet[2869]: E0124 00:44:16.065655 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.069721 kubelet[2869]: E0124 00:44:16.069667 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.069721 kubelet[2869]: W0124 00:44:16.069684 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.069721 kubelet[2869]: E0124 00:44:16.069698 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.075826 kubelet[2869]: I0124 00:44:16.075098 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/985a1218-3c37-4f6d-aa83-5ce6fdad91a9-registration-dir\") pod \"csi-node-driver-48xkv\" (UID: \"985a1218-3c37-4f6d-aa83-5ce6fdad91a9\") " pod="calico-system/csi-node-driver-48xkv" Jan 24 00:44:16.082419 kubelet[2869]: E0124 00:44:16.081607 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:16.084713 kubelet[2869]: E0124 00:44:16.084410 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.084713 kubelet[2869]: W0124 00:44:16.084428 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.084713 kubelet[2869]: E0124 00:44:16.084453 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
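The dns.go "Nameserver limits exceeded" entries in this log mean the node's resolv.conf lists more nameservers than kubelet will hand to a pod, so the applied line is trimmed to 1.1.1.1 1.0.0.1 8.8.8.8. A rough sketch of that trimming, assuming the usual three-entry cap (the function name and the omitted fourth address are illustrative):

```python
# Sketch of the nameserver truncation behind the "Nameserver limits exceeded"
# entries. Assumption: the cap of 3 matches kubelet's default limit; the fourth
# address below is a made-up example of an entry that would be omitted.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_nameservers: list[str]) -> list[str]:
    return resolv_conf_nameservers[:MAX_NAMESERVERS]

print(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"]))
# -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```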
Error: unexpected end of JSON input" Jan 24 00:44:16.089366 containerd[1607]: time="2026-01-24T00:44:16.089324890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lvg2n,Uid:d25e202b-3a55-4d84-9e1c-c0332b09007c,Namespace:calico-system,Attempt:0,}" Jan 24 00:44:16.094405 kubelet[2869]: E0124 00:44:16.091776 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.094405 kubelet[2869]: W0124 00:44:16.091797 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.094405 kubelet[2869]: E0124 00:44:16.091817 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.095902 kubelet[2869]: E0124 00:44:16.095547 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.095902 kubelet[2869]: W0124 00:44:16.095636 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.095902 kubelet[2869]: E0124 00:44:16.095656 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.105419 kubelet[2869]: E0124 00:44:16.105074 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.105419 kubelet[2869]: W0124 00:44:16.105097 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.105419 kubelet[2869]: E0124 00:44:16.105118 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.105676 kubelet[2869]: I0124 00:44:16.105583 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/985a1218-3c37-4f6d-aa83-5ce6fdad91a9-kubelet-dir\") pod \"csi-node-driver-48xkv\" (UID: \"985a1218-3c37-4f6d-aa83-5ce6fdad91a9\") " pod="calico-system/csi-node-driver-48xkv" Jan 24 00:44:16.109253 kubelet[2869]: E0124 00:44:16.109052 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.109253 kubelet[2869]: W0124 00:44:16.109074 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.109253 kubelet[2869]: E0124 00:44:16.109091 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:16.112630 kubelet[2869]: E0124 00:44:16.111886 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.112630 kubelet[2869]: W0124 00:44:16.112044 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.112630 kubelet[2869]: E0124 00:44:16.112064 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.112888 kubelet[2869]: E0124 00:44:16.112772 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.112888 kubelet[2869]: W0124 00:44:16.112849 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.112888 kubelet[2869]: E0124 00:44:16.112864 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.113621 kubelet[2869]: E0124 00:44:16.113360 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.113621 kubelet[2869]: W0124 00:44:16.113375 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.113621 kubelet[2869]: E0124 00:44:16.113387 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.153055 kubelet[2869]: E0124 00:44:16.152562 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.153055 kubelet[2869]: W0124 00:44:16.152665 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.153055 kubelet[2869]: E0124 00:44:16.152697 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.154612 kubelet[2869]: E0124 00:44:16.154428 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.154612 kubelet[2869]: W0124 00:44:16.154447 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.154612 kubelet[2869]: E0124 00:44:16.154467 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
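The repeated driver-call failures surrounding this point come from kubelet probing a FlexVolume driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds that is not installed yet, so the probe produces no output and the JSON unmarshal fails. Given the flexvol-driver-host volume mounted into calico-node above, the driver is presumably dropped in place once that pod runs, making these errors transient. For reference, a hedged sketch of the kind of JSON reply a FlexVolume driver is expected to print for init (file layout and names are illustrative, not Calico's actual driver):

```python
#!/usr/bin/env python3
# Illustrative stand-in for the missing nodeagent~uds/uds FlexVolume driver.
# A driver prints a JSON status object for each operation; an empty response is
# what triggers the "unexpected end of JSON input" errors in this log.
import json
import sys

def main() -> None:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # "Success" plus a capabilities object is the expected reply to kubelet's probe.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        print(json.dumps({"status": "Not supported"}))
        sys.exit(1)

if __name__ == "__main__":
    main()
```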
Error: unexpected end of JSON input" Jan 24 00:44:16.160827 kubelet[2869]: E0124 00:44:16.160650 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.160827 kubelet[2869]: W0124 00:44:16.160671 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.160827 kubelet[2869]: E0124 00:44:16.160693 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.165634 kubelet[2869]: E0124 00:44:16.164297 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.165634 kubelet[2869]: W0124 00:44:16.165282 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.165634 kubelet[2869]: E0124 00:44:16.165309 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.164000 audit[3371]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:16.199788 kernel: audit: type=1325 audit(1769215456.164:539): table=filter:115 family=2 entries=22 op=nft_register_rule pid=3371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:16.199865 kernel: audit: type=1300 audit(1769215456.164:539): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff8c709aa0 a2=0 a3=7fff8c709a8c items=0 ppid=3027 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.164000 audit[3371]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff8c709aa0 a2=0 a3=7fff8c709a8c items=0 ppid=3027 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.202102 kubelet[2869]: E0124 00:44:16.201649 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.202401 kubelet[2869]: W0124 00:44:16.202374 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.202739 kubelet[2869]: E0124 00:44:16.202716 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:16.216365 kubelet[2869]: E0124 00:44:16.216338 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.216497 kubelet[2869]: W0124 00:44:16.216478 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.216707 kubelet[2869]: E0124 00:44:16.216685 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.258423 kubelet[2869]: E0124 00:44:16.258389 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.262701 kubelet[2869]: W0124 00:44:16.262598 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.264901 kubelet[2869]: E0124 00:44:16.264874 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.164000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:16.290685 kubelet[2869]: E0124 00:44:16.286590 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.290685 kubelet[2869]: W0124 00:44:16.286614 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.290685 kubelet[2869]: E0124 00:44:16.286639 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.294269 kubelet[2869]: E0124 00:44:16.293030 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.294269 kubelet[2869]: W0124 00:44:16.293052 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.294269 kubelet[2869]: E0124 00:44:16.293071 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:16.304468 kernel: audit: type=1327 audit(1769215456.164:539): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:16.305067 kernel: audit: type=1325 audit(1769215456.200:540): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:16.200000 audit[3371]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:16.311535 kubelet[2869]: E0124 00:44:16.306555 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.311535 kubelet[2869]: W0124 00:44:16.306584 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.311535 kubelet[2869]: E0124 00:44:16.306610 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.322655 kubelet[2869]: E0124 00:44:16.322558 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.322655 kubelet[2869]: W0124 00:44:16.322648 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.325662 kubelet[2869]: E0124 00:44:16.322675 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.200000 audit[3371]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8c709aa0 a2=0 a3=0 items=0 ppid=3027 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:16.210000 audit: BPF prog-id=151 op=LOAD Jan 24 00:44:16.346331 kubelet[2869]: E0124 00:44:16.345036 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.346331 kubelet[2869]: W0124 00:44:16.345065 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.346331 kubelet[2869]: E0124 00:44:16.345091 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
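The audit PROCTITLE records in this log carry the audited command line as NUL-separated hex. Decoding the value attached to the iptables-restore entries recovers the actual invocation; a small sketch (the helper name is mine, the hex string is copied from the log above):

```python
# Decode an audit PROCTITLE value: the command line is hex-encoded, with NUL
# bytes separating argv elements. The hex below is taken from the log above.
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> ['iptables-restore', '-w', '5', '--noflush', '--counters']
```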
Error: unexpected end of JSON input" Jan 24 00:44:16.215000 audit: BPF prog-id=152 op=LOAD Jan 24 00:44:16.215000 audit[3318]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3307 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530653865323136323732326537343333663966363166356131346435 Jan 24 00:44:16.215000 audit: BPF prog-id=152 op=UNLOAD Jan 24 00:44:16.215000 audit[3318]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.347333 kubelet[2869]: E0124 00:44:16.347278 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.347470 kubelet[2869]: W0124 00:44:16.347420 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.347470 kubelet[2869]: E0124 00:44:16.347447 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530653865323136323732326537343333663966363166356131346435 Jan 24 00:44:16.220000 audit: BPF prog-id=153 op=LOAD Jan 24 00:44:16.220000 audit[3318]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3307 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530653865323136323732326537343333663966363166356131346435 Jan 24 00:44:16.220000 audit: BPF prog-id=154 op=LOAD Jan 24 00:44:16.351269 kubelet[2869]: I0124 00:44:16.348562 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/985a1218-3c37-4f6d-aa83-5ce6fdad91a9-varrun\") pod \"csi-node-driver-48xkv\" (UID: \"985a1218-3c37-4f6d-aa83-5ce6fdad91a9\") " pod="calico-system/csi-node-driver-48xkv" Jan 24 00:44:16.220000 audit[3318]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3307 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.220000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530653865323136323732326537343333663966363166356131346435 Jan 24 00:44:16.220000 audit: BPF prog-id=154 op=UNLOAD Jan 24 00:44:16.220000 audit[3318]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530653865323136323732326537343333663966363166356131346435 Jan 24 00:44:16.220000 audit: BPF prog-id=153 op=UNLOAD Jan 24 00:44:16.220000 audit[3318]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530653865323136323732326537343333663966363166356131346435 Jan 24 00:44:16.220000 audit: BPF prog-id=155 op=LOAD Jan 24 00:44:16.220000 audit[3318]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3307 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530653865323136323732326537343333663966363166356131346435 Jan 24 00:44:16.360952 kubelet[2869]: E0124 00:44:16.359849 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.360952 kubelet[2869]: W0124 00:44:16.359949 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.360952 kubelet[2869]: E0124 00:44:16.360054 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
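The SYSCALL records here log raw x86_64 syscall numbers (arch=c000003e): syscall=46 on the iptables-restore entries is sendmsg, the netlink write that installs the nft rules, while the runc entries use syscall=321 (bpf, matching the BPF prog-id LOAD/UNLOAD lines) and syscall=3 (close). A tiny lookup covering just the numbers that appear here:

```python
# Map the raw x86_64 syscall numbers appearing in this log's SYSCALL records
# to their names (standard x86_64 syscall table; only the ones seen here).
X86_64_SYSCALLS = {3: "close", 46: "sendmsg", 321: "bpf"}

for nr in (46, 321, 3):
    print(f"syscall={nr} -> {X86_64_SYSCALLS[nr]}")
```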
Error: unexpected end of JSON input" Jan 24 00:44:16.368037 kubelet[2869]: I0124 00:44:16.367843 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmt94\" (UniqueName: \"kubernetes.io/projected/985a1218-3c37-4f6d-aa83-5ce6fdad91a9-kube-api-access-wmt94\") pod \"csi-node-driver-48xkv\" (UID: \"985a1218-3c37-4f6d-aa83-5ce6fdad91a9\") " pod="calico-system/csi-node-driver-48xkv" Jan 24 00:44:16.372872 kubelet[2869]: E0124 00:44:16.372782 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.372949 kubelet[2869]: W0124 00:44:16.372874 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.372949 kubelet[2869]: E0124 00:44:16.372899 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.397415 kubelet[2869]: E0124 00:44:16.393437 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.397415 kubelet[2869]: W0124 00:44:16.393540 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.397415 kubelet[2869]: E0124 00:44:16.393574 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.399942 kubelet[2869]: E0124 00:44:16.399818 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.399942 kubelet[2869]: W0124 00:44:16.399840 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.399942 kubelet[2869]: E0124 00:44:16.399861 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.406579 kubelet[2869]: E0124 00:44:16.405894 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.406579 kubelet[2869]: W0124 00:44:16.405916 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.406579 kubelet[2869]: E0124 00:44:16.406059 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:16.409377 kubelet[2869]: E0124 00:44:16.408515 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.409481 kubelet[2869]: W0124 00:44:16.408532 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.409651 kubelet[2869]: E0124 00:44:16.409635 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.412779 kubelet[2869]: E0124 00:44:16.412761 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.413849 kubelet[2869]: W0124 00:44:16.413827 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.413933 kubelet[2869]: E0124 00:44:16.413917 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.416278 kubelet[2869]: E0124 00:44:16.415775 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.416278 kubelet[2869]: W0124 00:44:16.415792 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.416278 kubelet[2869]: E0124 00:44:16.415810 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.419638 kubelet[2869]: E0124 00:44:16.416552 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.423574 kubelet[2869]: W0124 00:44:16.419625 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.423574 kubelet[2869]: E0124 00:44:16.421463 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:16.423574 kubelet[2869]: I0124 00:44:16.422757 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/985a1218-3c37-4f6d-aa83-5ce6fdad91a9-socket-dir\") pod \"csi-node-driver-48xkv\" (UID: \"985a1218-3c37-4f6d-aa83-5ce6fdad91a9\") " pod="calico-system/csi-node-driver-48xkv" Jan 24 00:44:16.425607 kubelet[2869]: E0124 00:44:16.425519 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.425607 kubelet[2869]: W0124 00:44:16.425535 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.425607 kubelet[2869]: E0124 00:44:16.425551 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.442454 kubelet[2869]: E0124 00:44:16.442419 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.442454 kubelet[2869]: W0124 00:44:16.442446 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.443504 kubelet[2869]: E0124 00:44:16.442472 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.452417 kubelet[2869]: E0124 00:44:16.451902 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.452417 kubelet[2869]: W0124 00:44:16.452069 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.452417 kubelet[2869]: E0124 00:44:16.452104 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.453633 kubelet[2869]: E0124 00:44:16.453568 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.453633 kubelet[2869]: W0124 00:44:16.453584 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.453633 kubelet[2869]: E0124 00:44:16.453600 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:16.455359 kubelet[2869]: E0124 00:44:16.454870 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.455359 kubelet[2869]: W0124 00:44:16.454885 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.455359 kubelet[2869]: E0124 00:44:16.454900 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.455894 kubelet[2869]: E0124 00:44:16.455561 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.455894 kubelet[2869]: W0124 00:44:16.455639 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.455894 kubelet[2869]: E0124 00:44:16.455653 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.523409 containerd[1607]: time="2026-01-24T00:44:16.517648914Z" level=info msg="connecting to shim baf8147eecc32e6465a2b4641b51a47fa3334968e631692742403f9950914415" address="unix:///run/containerd/s/2e9c9d3c1dc5d19c1497c3203c4a7462c3e84feeb0ee2c30bb2ffb37ae4c3e59" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:44:16.541481 kubelet[2869]: E0124 00:44:16.537926 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.541481 kubelet[2869]: W0124 00:44:16.538373 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.541481 kubelet[2869]: E0124 00:44:16.538408 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.541481 kubelet[2869]: E0124 00:44:16.539472 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.541481 kubelet[2869]: W0124 00:44:16.539485 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.541481 kubelet[2869]: E0124 00:44:16.539503 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:16.541481 kubelet[2869]: E0124 00:44:16.539878 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.541481 kubelet[2869]: W0124 00:44:16.539889 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.541481 kubelet[2869]: E0124 00:44:16.539902 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.544036 kubelet[2869]: E0124 00:44:16.543719 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.546564 kubelet[2869]: W0124 00:44:16.543942 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.546564 kubelet[2869]: E0124 00:44:16.544758 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.553088 kubelet[2869]: E0124 00:44:16.552322 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.553088 kubelet[2869]: W0124 00:44:16.552400 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.553088 kubelet[2869]: E0124 00:44:16.552420 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.555417 kubelet[2869]: E0124 00:44:16.555364 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.555417 kubelet[2869]: W0124 00:44:16.555382 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.555417 kubelet[2869]: E0124 00:44:16.555401 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.558809 kubelet[2869]: E0124 00:44:16.558392 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.559713 kubelet[2869]: W0124 00:44:16.559093 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.559713 kubelet[2869]: E0124 00:44:16.559464 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:16.563329 kubelet[2869]: E0124 00:44:16.562745 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.563329 kubelet[2869]: W0124 00:44:16.562760 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.563329 kubelet[2869]: E0124 00:44:16.562776 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.564905 kubelet[2869]: E0124 00:44:16.564439 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.564905 kubelet[2869]: W0124 00:44:16.564457 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.564905 kubelet[2869]: E0124 00:44:16.564474 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.570570 kubelet[2869]: E0124 00:44:16.566378 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.570570 kubelet[2869]: W0124 00:44:16.566389 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.570570 kubelet[2869]: E0124 00:44:16.566403 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.570570 kubelet[2869]: E0124 00:44:16.569414 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:16.570700 containerd[1607]: time="2026-01-24T00:44:16.565601644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79b59bb85b-5gnqv,Uid:113fd4c2-97de-4464-805f-1a320c120972,Namespace:calico-system,Attempt:0,} returns sandbox id \"50e8e2162722e7433f9f61f5a14d5ed4944fc87fa7be8a3f8a81cf8942de58e2\"" Jan 24 00:44:16.570746 kubelet[2869]: E0124 00:44:16.570616 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.570746 kubelet[2869]: W0124 00:44:16.570630 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.570746 kubelet[2869]: E0124 00:44:16.570647 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:16.577288 kubelet[2869]: E0124 00:44:16.573845 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.577288 kubelet[2869]: W0124 00:44:16.573940 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.577288 kubelet[2869]: E0124 00:44:16.574037 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.577288 kubelet[2869]: E0124 00:44:16.576570 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.577288 kubelet[2869]: W0124 00:44:16.576582 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.577288 kubelet[2869]: E0124 00:44:16.576595 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.577629 containerd[1607]: time="2026-01-24T00:44:16.574945693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 00:44:16.584688 kubelet[2869]: E0124 00:44:16.583293 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.584688 kubelet[2869]: W0124 00:44:16.583315 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.584688 kubelet[2869]: E0124 00:44:16.583333 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.588870 kubelet[2869]: E0124 00:44:16.588658 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.588954 kubelet[2869]: W0124 00:44:16.588870 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.588954 kubelet[2869]: E0124 00:44:16.588893 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:16.653528 kubelet[2869]: E0124 00:44:16.648499 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:16.653528 kubelet[2869]: W0124 00:44:16.649303 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:16.653528 kubelet[2869]: E0124 00:44:16.649332 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:16.772703 systemd[1]: Started cri-containerd-baf8147eecc32e6465a2b4641b51a47fa3334968e631692742403f9950914415.scope - libcontainer container baf8147eecc32e6465a2b4641b51a47fa3334968e631692742403f9950914415. Jan 24 00:44:16.819000 audit: BPF prog-id=156 op=LOAD Jan 24 00:44:16.821000 audit: BPF prog-id=157 op=LOAD Jan 24 00:44:16.821000 audit[3444]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3417 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663831343765656363333265363436356132623436343162353161 Jan 24 00:44:16.822000 audit: BPF prog-id=157 op=UNLOAD Jan 24 00:44:16.822000 audit[3444]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663831343765656363333265363436356132623436343162353161 Jan 24 00:44:16.822000 audit: BPF prog-id=158 op=LOAD Jan 24 00:44:16.822000 audit[3444]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3417 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663831343765656363333265363436356132623436343162353161 Jan 24 00:44:16.832000 audit: BPF prog-id=159 op=LOAD Jan 24 00:44:16.832000 audit[3444]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3417 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663831343765656363333265363436356132623436343162353161 Jan 24 00:44:16.833000 audit: BPF prog-id=159 op=UNLOAD Jan 24 00:44:16.833000 audit[3444]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.833000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663831343765656363333265363436356132623436343162353161 Jan 24 00:44:16.836000 audit: BPF prog-id=158 op=UNLOAD Jan 24 00:44:16.836000 audit[3444]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663831343765656363333265363436356132623436343162353161 Jan 24 00:44:16.837000 audit: BPF prog-id=160 op=LOAD Jan 24 00:44:16.837000 audit[3444]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3417 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:16.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663831343765656363333265363436356132623436343162353161 Jan 24 00:44:16.954428 containerd[1607]: time="2026-01-24T00:44:16.954078522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lvg2n,Uid:d25e202b-3a55-4d84-9e1c-c0332b09007c,Namespace:calico-system,Attempt:0,} returns sandbox id \"baf8147eecc32e6465a2b4641b51a47fa3334968e631692742403f9950914415\"" Jan 24 00:44:16.958882 kubelet[2869]: E0124 00:44:16.958698 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:17.752334 kubelet[2869]: E0124 00:44:17.751122 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:44:18.382886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount127150339.mount: Deactivated successfully. 
Jan 24 00:44:19.740870 kubelet[2869]: E0124 00:44:19.740764 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:44:21.750321 kubelet[2869]: E0124 00:44:21.749405 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:44:22.610806 containerd[1607]: time="2026-01-24T00:44:22.610541944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:22.618932 containerd[1607]: time="2026-01-24T00:44:22.618882029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 24 00:44:22.623524 containerd[1607]: time="2026-01-24T00:44:22.623326312Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:22.642683 containerd[1607]: time="2026-01-24T00:44:22.642643142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:22.647383 containerd[1607]: time="2026-01-24T00:44:22.643464885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 6.066506739s" Jan 24 00:44:22.647383 containerd[1607]: time="2026-01-24T00:44:22.643841467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 00:44:22.655373 containerd[1607]: time="2026-01-24T00:44:22.652700703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:44:22.745710 containerd[1607]: time="2026-01-24T00:44:22.745085014Z" level=info msg="CreateContainer within sandbox \"50e8e2162722e7433f9f61f5a14d5ed4944fc87fa7be8a3f8a81cf8942de58e2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 00:44:22.970535 containerd[1607]: time="2026-01-24T00:44:22.969619260Z" level=info msg="Container 0cd9c9f8e600ba0c28505430ae8cbfd2c056712d99c1e0386b9dfbc9a46c7de4: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:44:23.014645 containerd[1607]: time="2026-01-24T00:44:23.013750757Z" level=info msg="CreateContainer within sandbox \"50e8e2162722e7433f9f61f5a14d5ed4944fc87fa7be8a3f8a81cf8942de58e2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0cd9c9f8e600ba0c28505430ae8cbfd2c056712d99c1e0386b9dfbc9a46c7de4\"" Jan 24 00:44:23.016450 containerd[1607]: time="2026-01-24T00:44:23.014943051Z" level=info msg="StartContainer for 
\"0cd9c9f8e600ba0c28505430ae8cbfd2c056712d99c1e0386b9dfbc9a46c7de4\"" Jan 24 00:44:23.018307 containerd[1607]: time="2026-01-24T00:44:23.017897653Z" level=info msg="connecting to shim 0cd9c9f8e600ba0c28505430ae8cbfd2c056712d99c1e0386b9dfbc9a46c7de4" address="unix:///run/containerd/s/75a2f5c82b3357aa7949c2aa95ca5d983a86c36f321cc6b1e38d869688c33188" protocol=ttrpc version=3 Jan 24 00:44:23.131522 systemd[1]: Started cri-containerd-0cd9c9f8e600ba0c28505430ae8cbfd2c056712d99c1e0386b9dfbc9a46c7de4.scope - libcontainer container 0cd9c9f8e600ba0c28505430ae8cbfd2c056712d99c1e0386b9dfbc9a46c7de4. Jan 24 00:44:23.319000 audit: BPF prog-id=161 op=LOAD Jan 24 00:44:23.344775 kernel: kauditd_printk_skb: 46 callbacks suppressed Jan 24 00:44:23.344864 kernel: audit: type=1334 audit(1769215463.319:557): prog-id=161 op=LOAD Jan 24 00:44:23.362310 kernel: audit: type=1334 audit(1769215463.322:558): prog-id=162 op=LOAD Jan 24 00:44:23.322000 audit: BPF prog-id=162 op=LOAD Jan 24 00:44:23.322000 audit[3482]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3307 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:23.401859 kernel: audit: type=1300 audit(1769215463.322:558): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3307 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:23.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643963396638653630306261306332383530353433306165386362 Jan 24 00:44:23.322000 audit: BPF prog-id=162 op=UNLOAD Jan 24 00:44:23.461080 kernel: audit: type=1327 audit(1769215463.322:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643963396638653630306261306332383530353433306165386362 Jan 24 00:44:23.461319 kernel: audit: type=1334 audit(1769215463.322:559): prog-id=162 op=UNLOAD Jan 24 00:44:23.322000 audit[3482]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:23.497452 kernel: audit: type=1300 audit(1769215463.322:559): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:23.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643963396638653630306261306332383530353433306165386362 Jan 24 00:44:23.322000 audit: BPF prog-id=163 op=LOAD Jan 24 00:44:23.543311 kernel: audit: type=1327 audit(1769215463.322:559): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643963396638653630306261306332383530353433306165386362 Jan 24 00:44:23.543404 kernel: audit: type=1334 audit(1769215463.322:560): prog-id=163 op=LOAD Jan 24 00:44:23.322000 audit[3482]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3307 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:23.582494 containerd[1607]: time="2026-01-24T00:44:23.581435300Z" level=info msg="StartContainer for \"0cd9c9f8e600ba0c28505430ae8cbfd2c056712d99c1e0386b9dfbc9a46c7de4\" returns successfully" Jan 24 00:44:23.588494 kernel: audit: type=1300 audit(1769215463.322:560): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3307 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:23.588604 kernel: audit: type=1327 audit(1769215463.322:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643963396638653630306261306332383530353433306165386362 Jan 24 00:44:23.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643963396638653630306261306332383530353433306165386362 Jan 24 00:44:23.322000 audit: BPF prog-id=164 op=LOAD Jan 24 00:44:23.322000 audit[3482]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3307 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:23.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643963396638653630306261306332383530353433306165386362 Jan 24 00:44:23.322000 audit: BPF prog-id=164 op=UNLOAD Jan 24 00:44:23.322000 audit[3482]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:23.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643963396638653630306261306332383530353433306165386362 Jan 24 00:44:23.322000 audit: BPF prog-id=163 op=UNLOAD Jan 24 00:44:23.322000 audit[3482]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:23.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643963396638653630306261306332383530353433306165386362 Jan 24 00:44:23.322000 audit: BPF prog-id=165 op=LOAD Jan 24 00:44:23.322000 audit[3482]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3307 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:23.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643963396638653630306261306332383530353433306165386362 Jan 24 00:44:23.741340 kubelet[2869]: E0124 00:44:23.740824 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:44:24.658492 kubelet[2869]: E0124 00:44:24.656765 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:24.669674 kubelet[2869]: E0124 00:44:24.663725 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.675766 kubelet[2869]: W0124 00:44:24.669101 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.693979 kubelet[2869]: E0124 00:44:24.683101 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.707130 kubelet[2869]: E0124 00:44:24.703610 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.707130 kubelet[2869]: W0124 00:44:24.703642 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.707130 kubelet[2869]: E0124 00:44:24.703664 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:24.713718 kubelet[2869]: E0124 00:44:24.713437 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.713718 kubelet[2869]: W0124 00:44:24.713462 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.713718 kubelet[2869]: E0124 00:44:24.713482 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.718508 kubelet[2869]: E0124 00:44:24.718489 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.719785 kubelet[2869]: W0124 00:44:24.719763 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.720114 kubelet[2869]: E0124 00:44:24.719877 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.723089 kubelet[2869]: E0124 00:44:24.722988 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.723530 kubelet[2869]: W0124 00:44:24.723397 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.723530 kubelet[2869]: E0124 00:44:24.723422 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.723940 kubelet[2869]: E0124 00:44:24.723924 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.724670 kubelet[2869]: W0124 00:44:24.724551 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.724670 kubelet[2869]: E0124 00:44:24.724575 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.724941 kubelet[2869]: E0124 00:44:24.724926 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.725712 kubelet[2869]: W0124 00:44:24.725573 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.725712 kubelet[2869]: E0124 00:44:24.725598 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:24.734786 kubelet[2869]: E0124 00:44:24.733879 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.734786 kubelet[2869]: W0124 00:44:24.733900 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.734786 kubelet[2869]: E0124 00:44:24.733916 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.737103 kubelet[2869]: E0124 00:44:24.736825 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.737676 kubelet[2869]: W0124 00:44:24.737656 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.737758 kubelet[2869]: E0124 00:44:24.737741 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.740864 kubelet[2869]: E0124 00:44:24.740728 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.740864 kubelet[2869]: W0124 00:44:24.740742 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.740864 kubelet[2869]: E0124 00:44:24.740757 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.747722 kubelet[2869]: E0124 00:44:24.747499 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.751379 kubelet[2869]: W0124 00:44:24.751275 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.751451 kubelet[2869]: E0124 00:44:24.751376 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.754121 kubelet[2869]: E0124 00:44:24.753772 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.754121 kubelet[2869]: W0124 00:44:24.753792 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.754121 kubelet[2869]: E0124 00:44:24.753810 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:24.762257 kubelet[2869]: E0124 00:44:24.759589 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.762257 kubelet[2869]: W0124 00:44:24.759608 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.762257 kubelet[2869]: E0124 00:44:24.759628 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.765903 kubelet[2869]: E0124 00:44:24.765793 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.765903 kubelet[2869]: W0124 00:44:24.765876 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.765903 kubelet[2869]: E0124 00:44:24.765895 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.782756 kubelet[2869]: E0124 00:44:24.782572 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.782756 kubelet[2869]: W0124 00:44:24.782668 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.782756 kubelet[2869]: E0124 00:44:24.782707 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.789681 kubelet[2869]: E0124 00:44:24.788722 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.789681 kubelet[2869]: W0124 00:44:24.788753 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.789681 kubelet[2869]: E0124 00:44:24.788778 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:24.791870 kubelet[2869]: I0124 00:44:24.791684 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79b59bb85b-5gnqv" podStartSLOduration=4.714248847 podStartE2EDuration="10.791669764s" podCreationTimestamp="2026-01-24 00:44:14 +0000 UTC" firstStartedPulling="2026-01-24 00:44:16.571327974 +0000 UTC m=+43.290628200" lastFinishedPulling="2026-01-24 00:44:22.648748901 +0000 UTC m=+49.368049117" observedRunningTime="2026-01-24 00:44:24.772779414 +0000 UTC m=+51.492079659" watchObservedRunningTime="2026-01-24 00:44:24.791669764 +0000 UTC m=+51.510969981" Jan 24 00:44:24.795529 kubelet[2869]: E0124 00:44:24.795444 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.795529 kubelet[2869]: W0124 00:44:24.795477 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.795529 kubelet[2869]: E0124 00:44:24.795506 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.800253 kubelet[2869]: E0124 00:44:24.799326 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.800253 kubelet[2869]: W0124 00:44:24.799360 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.800253 kubelet[2869]: E0124 00:44:24.799388 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.801400 kubelet[2869]: E0124 00:44:24.801319 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.801400 kubelet[2869]: W0124 00:44:24.801337 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.801400 kubelet[2869]: E0124 00:44:24.801354 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.802705 kubelet[2869]: E0124 00:44:24.801852 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.802705 kubelet[2869]: W0124 00:44:24.801934 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.802705 kubelet[2869]: E0124 00:44:24.801951 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:24.802705 kubelet[2869]: E0124 00:44:24.802647 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.802705 kubelet[2869]: W0124 00:44:24.802659 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.802705 kubelet[2869]: E0124 00:44:24.802673 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.805977 kubelet[2869]: E0124 00:44:24.805362 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.805977 kubelet[2869]: W0124 00:44:24.805379 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.805977 kubelet[2869]: E0124 00:44:24.805393 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.809836 kubelet[2869]: E0124 00:44:24.806286 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.809836 kubelet[2869]: W0124 00:44:24.806296 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.809836 kubelet[2869]: E0124 00:44:24.806306 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.809836 kubelet[2869]: E0124 00:44:24.807550 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.809836 kubelet[2869]: W0124 00:44:24.807562 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.809836 kubelet[2869]: E0124 00:44:24.807574 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.809836 kubelet[2869]: E0124 00:44:24.808543 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.809836 kubelet[2869]: W0124 00:44:24.808555 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.809836 kubelet[2869]: E0124 00:44:24.808568 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:24.809836 kubelet[2869]: E0124 00:44:24.809627 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.810573 kubelet[2869]: W0124 00:44:24.809638 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.810573 kubelet[2869]: E0124 00:44:24.809732 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.811353 kubelet[2869]: E0124 00:44:24.811124 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.812287 kubelet[2869]: W0124 00:44:24.811555 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.812287 kubelet[2869]: E0124 00:44:24.812096 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.813554 kubelet[2869]: E0124 00:44:24.813117 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.813554 kubelet[2869]: W0124 00:44:24.813346 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.813554 kubelet[2869]: E0124 00:44:24.813361 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.815715 kubelet[2869]: E0124 00:44:24.815273 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.815715 kubelet[2869]: W0124 00:44:24.815288 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.815715 kubelet[2869]: E0124 00:44:24.815300 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.815715 kubelet[2869]: E0124 00:44:24.815899 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.815715 kubelet[2869]: W0124 00:44:24.815911 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.816724 kubelet[2869]: E0124 00:44:24.816290 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:24.819912 kubelet[2869]: E0124 00:44:24.818129 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.819912 kubelet[2869]: W0124 00:44:24.818710 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.819912 kubelet[2869]: E0124 00:44:24.818726 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.825564 kubelet[2869]: E0124 00:44:24.823540 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.825564 kubelet[2869]: W0124 00:44:24.823556 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.825564 kubelet[2869]: E0124 00:44:24.823569 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:24.843964 kubelet[2869]: E0124 00:44:24.843794 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:24.844369 kubelet[2869]: W0124 00:44:24.843892 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:24.844426 kubelet[2869]: E0124 00:44:24.844382 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:25.118856 containerd[1607]: time="2026-01-24T00:44:25.118313578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:25.146568 containerd[1607]: time="2026-01-24T00:44:25.146523695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=2517" Jan 24 00:44:25.163461 containerd[1607]: time="2026-01-24T00:44:25.162407806Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:25.179614 containerd[1607]: time="2026-01-24T00:44:25.179477743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:25.189369 containerd[1607]: time="2026-01-24T00:44:25.189110457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.536370811s" Jan 24 00:44:25.190393 containerd[1607]: time="2026-01-24T00:44:25.190121563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:44:25.246801 containerd[1607]: time="2026-01-24T00:44:25.246533661Z" level=info msg="CreateContainer within sandbox \"baf8147eecc32e6465a2b4641b51a47fa3334968e631692742403f9950914415\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:44:25.320884 containerd[1607]: time="2026-01-24T00:44:25.319547647Z" level=info msg="Container 658610c2b2b7e113d1fa938f0f922c5ab543e0da5f62388e2ad8a4ba4ea55e4b: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:44:25.392620 containerd[1607]: time="2026-01-24T00:44:25.391655545Z" level=info msg="CreateContainer within sandbox \"baf8147eecc32e6465a2b4641b51a47fa3334968e631692742403f9950914415\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"658610c2b2b7e113d1fa938f0f922c5ab543e0da5f62388e2ad8a4ba4ea55e4b\"" Jan 24 00:44:25.404088 containerd[1607]: time="2026-01-24T00:44:25.403714735Z" level=info msg="StartContainer for \"658610c2b2b7e113d1fa938f0f922c5ab543e0da5f62388e2ad8a4ba4ea55e4b\"" Jan 24 00:44:25.422865 containerd[1607]: time="2026-01-24T00:44:25.422312804Z" level=info msg="connecting to shim 658610c2b2b7e113d1fa938f0f922c5ab543e0da5f62388e2ad8a4ba4ea55e4b" address="unix:///run/containerd/s/2e9c9d3c1dc5d19c1497c3203c4a7462c3e84feeb0ee2c30bb2ffb37ae4c3e59" protocol=ttrpc version=3 Jan 24 00:44:25.892790 kubelet[2869]: E0124 00:44:25.892522 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:44:25.909644 systemd[1]: Started 
cri-containerd-658610c2b2b7e113d1fa938f0f922c5ab543e0da5f62388e2ad8a4ba4ea55e4b.scope - libcontainer container 658610c2b2b7e113d1fa938f0f922c5ab543e0da5f62388e2ad8a4ba4ea55e4b. Jan 24 00:44:26.160804 kubelet[2869]: E0124 00:44:26.145633 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:26.213712 kubelet[2869]: E0124 00:44:26.212743 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:26.213712 kubelet[2869]: W0124 00:44:26.213452 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:26.256691 kubelet[2869]: E0124 00:44:26.214128 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:26.257103 kubelet[2869]: E0124 00:44:26.256926 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:26.262753 kubelet[2869]: W0124 00:44:26.257007 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:26.262753 kubelet[2869]: E0124 00:44:26.261585 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:26.288569 kubelet[2869]: E0124 00:44:26.284844 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:26.288569 kubelet[2869]: W0124 00:44:26.287406 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:26.288569 kubelet[2869]: E0124 00:44:26.287915 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:26.298874 kubelet[2869]: E0124 00:44:26.297517 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:26.298874 kubelet[2869]: W0124 00:44:26.297757 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:26.298874 kubelet[2869]: E0124 00:44:26.298349 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:26.303685 kubelet[2869]: E0124 00:44:26.302950 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:26.303685 kubelet[2869]: W0124 00:44:26.302970 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:26.304001 kubelet[2869]: E0124 00:44:26.303827 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:26.380379 kubelet[2869]: E0124 00:44:26.373372 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:26.380379 kubelet[2869]: W0124 00:44:26.373732 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:26.380379 kubelet[2869]: E0124 00:44:26.382685 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:26.502363 kubelet[2869]: E0124 00:44:26.498480 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:26.502363 kubelet[2869]: W0124 00:44:26.498997 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:26.502363 kubelet[2869]: E0124 00:44:26.499508 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:26.558824 kubelet[2869]: E0124 00:44:26.502454 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:26.558824 kubelet[2869]: W0124 00:44:26.502469 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:26.558824 kubelet[2869]: E0124 00:44:26.502490 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:26.558824 kubelet[2869]: E0124 00:44:26.557767 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:26.558824 kubelet[2869]: W0124 00:44:26.558420 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:26.582007 kubelet[2869]: E0124 00:44:26.558963 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:26.701999 kubelet[2869]: E0124 00:44:26.697715 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:26.701999 kubelet[2869]: W0124 00:44:26.700954 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:26.782698 kubelet[2869]: E0124 00:44:26.717828 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:26.937850 kubelet[2869]: E0124 00:44:26.917818 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:26.937850 kubelet[2869]: W0124 00:44:26.922738 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:26.944605 kubelet[2869]: E0124 00:44:26.944496 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.015786 kubelet[2869]: E0124 00:44:26.983782 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.015786 kubelet[2869]: W0124 00:44:26.983897 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.024656 kubelet[2869]: E0124 00:44:27.022682 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.174823 kubelet[2869]: E0124 00:44:27.119891 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.216820 kubelet[2869]: W0124 00:44:27.200427 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.216820 kubelet[2869]: E0124 00:44:27.206696 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.286788 kubelet[2869]: E0124 00:44:27.248400 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.286788 kubelet[2869]: W0124 00:44:27.248753 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.286788 kubelet[2869]: E0124 00:44:27.265756 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:27.409525 kubelet[2869]: E0124 00:44:27.401868 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.409525 kubelet[2869]: W0124 00:44:27.408950 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.409525 kubelet[2869]: E0124 00:44:27.409589 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.468584 kubelet[2869]: E0124 00:44:27.439672 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:27.468584 kubelet[2869]: E0124 00:44:27.461536 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.468584 kubelet[2869]: W0124 00:44:27.461768 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.468584 kubelet[2869]: E0124 00:44:27.462300 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.511973 kubelet[2869]: E0124 00:44:27.510877 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.511973 kubelet[2869]: W0124 00:44:27.511688 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.581927 kubelet[2869]: E0124 00:44:27.519489 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.655897 kubelet[2869]: E0124 00:44:27.621844 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.677450 kubelet[2869]: W0124 00:44:27.676820 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.677450 kubelet[2869]: E0124 00:44:27.677097 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:27.769473 kubelet[2869]: E0124 00:44:27.705442 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.769473 kubelet[2869]: W0124 00:44:27.737951 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.769473 kubelet[2869]: E0124 00:44:27.739368 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.792625 kubelet[2869]: E0124 00:44:27.785286 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.792625 kubelet[2869]: W0124 00:44:27.785539 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.792625 kubelet[2869]: E0124 00:44:27.785968 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.806602 kubelet[2869]: E0124 00:44:27.804471 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.806602 kubelet[2869]: W0124 00:44:27.804625 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.806602 kubelet[2869]: E0124 00:44:27.804855 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.822782 kubelet[2869]: E0124 00:44:27.822321 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:44:27.846997 kubelet[2869]: E0124 00:44:27.846835 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.846997 kubelet[2869]: W0124 00:44:27.846935 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.847378 kubelet[2869]: E0124 00:44:27.847112 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:27.857582 kubelet[2869]: E0124 00:44:27.857394 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.857582 kubelet[2869]: W0124 00:44:27.857417 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.857582 kubelet[2869]: E0124 00:44:27.857441 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.861477 kubelet[2869]: E0124 00:44:27.861318 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.861477 kubelet[2869]: W0124 00:44:27.861414 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.861477 kubelet[2869]: E0124 00:44:27.861435 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.864005 kubelet[2869]: E0124 00:44:27.863723 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.864005 kubelet[2869]: W0124 00:44:27.863739 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.864005 kubelet[2869]: E0124 00:44:27.863754 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.867443 kubelet[2869]: E0124 00:44:27.866958 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.867443 kubelet[2869]: W0124 00:44:27.866977 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.867443 kubelet[2869]: E0124 00:44:27.866992 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.869013 kubelet[2869]: E0124 00:44:27.868704 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.869013 kubelet[2869]: W0124 00:44:27.868796 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.869013 kubelet[2869]: E0124 00:44:27.868817 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:27.874923 kubelet[2869]: E0124 00:44:27.874691 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.874923 kubelet[2869]: W0124 00:44:27.874709 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.874923 kubelet[2869]: E0124 00:44:27.874726 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.878391 kubelet[2869]: E0124 00:44:27.877341 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.878391 kubelet[2869]: W0124 00:44:27.877725 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.878391 kubelet[2869]: E0124 00:44:27.877740 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.880830 kubelet[2869]: E0124 00:44:27.880442 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.881767 kubelet[2869]: W0124 00:44:27.881573 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.881767 kubelet[2869]: E0124 00:44:27.881664 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.890990 kubelet[2869]: E0124 00:44:27.889753 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.890990 kubelet[2869]: W0124 00:44:27.889837 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.890990 kubelet[2869]: E0124 00:44:27.889859 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.896750 kubelet[2869]: E0124 00:44:27.895438 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.896750 kubelet[2869]: W0124 00:44:27.895461 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.896750 kubelet[2869]: E0124 00:44:27.895476 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:27.908656 kubelet[2869]: E0124 00:44:27.908508 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.908656 kubelet[2869]: W0124 00:44:27.908608 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.908656 kubelet[2869]: E0124 00:44:27.908630 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.910414 kubelet[2869]: E0124 00:44:27.910390 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.910506 kubelet[2869]: W0124 00:44:27.910490 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.910583 kubelet[2869]: E0124 00:44:27.910567 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.912877 kubelet[2869]: E0124 00:44:27.912863 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.912957 kubelet[2869]: W0124 00:44:27.912941 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.913127 kubelet[2869]: E0124 00:44:27.913010 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.918287 kubelet[2869]: E0124 00:44:27.916694 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.918664 kubelet[2869]: W0124 00:44:27.918355 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.918800 kubelet[2869]: E0124 00:44:27.918781 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.919480 kubelet[2869]: E0124 00:44:27.919461 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.920296 kubelet[2869]: W0124 00:44:27.920280 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.920934 kubelet[2869]: E0124 00:44:27.920351 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:27.926893 kubelet[2869]: E0124 00:44:27.926667 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.926893 kubelet[2869]: W0124 00:44:27.926803 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.926893 kubelet[2869]: E0124 00:44:27.926844 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.929007 kubelet[2869]: E0124 00:44:27.928910 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.929007 kubelet[2869]: W0124 00:44:27.928930 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.929007 kubelet[2869]: E0124 00:44:27.928946 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.945735 kubelet[2869]: E0124 00:44:27.945716 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.945977 kubelet[2869]: W0124 00:44:27.945823 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.945977 kubelet[2869]: E0124 00:44:27.945843 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.946718 kubelet[2869]: E0124 00:44:27.946699 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.946807 kubelet[2869]: W0124 00:44:27.946789 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.946912 kubelet[2869]: E0124 00:44:27.946895 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.947650 kubelet[2869]: E0124 00:44:27.947635 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.947746 kubelet[2869]: W0124 00:44:27.947730 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.947816 kubelet[2869]: E0124 00:44:27.947801 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:27.949483 kubelet[2869]: E0124 00:44:27.949465 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.950727 kubelet[2869]: W0124 00:44:27.949691 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.950824 kubelet[2869]: E0124 00:44:27.950803 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.952298 kubelet[2869]: E0124 00:44:27.952281 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.952387 kubelet[2869]: W0124 00:44:27.952370 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.952462 kubelet[2869]: E0124 00:44:27.952448 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.952993 kubelet[2869]: E0124 00:44:27.952821 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.952993 kubelet[2869]: W0124 00:44:27.952837 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.952993 kubelet[2869]: E0124 00:44:27.952851 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.954493 kubelet[2869]: E0124 00:44:27.954477 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.954655 kubelet[2869]: W0124 00:44:27.954573 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.954655 kubelet[2869]: E0124 00:44:27.954592 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.955955 kubelet[2869]: E0124 00:44:27.955939 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.956448 kubelet[2869]: W0124 00:44:27.956115 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.956448 kubelet[2869]: E0124 00:44:27.956274 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:27.957298 kubelet[2869]: E0124 00:44:27.957282 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.957385 kubelet[2869]: W0124 00:44:27.957370 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.957452 kubelet[2869]: E0124 00:44:27.957437 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.956000 audit[3622]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3622 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:27.956000 audit[3622]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffde3029200 a2=0 a3=7ffde30291ec items=0 ppid=3027 pid=3622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:27.956000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:27.961436 kubelet[2869]: E0124 00:44:27.961420 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.961791 kubelet[2869]: W0124 00:44:27.961705 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.962448 kubelet[2869]: E0124 00:44:27.962430 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.962000 audit[3622]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3622 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:44:27.962000 audit[3622]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffde3029200 a2=0 a3=7ffde30291ec items=0 ppid=3027 pid=3622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:27.962000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:44:27.965813 kubelet[2869]: E0124 00:44:27.963423 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.965813 kubelet[2869]: W0124 00:44:27.963745 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.965901 kubelet[2869]: E0124 00:44:27.965337 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:27.967890 kubelet[2869]: E0124 00:44:27.967814 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.967890 kubelet[2869]: W0124 00:44:27.967830 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.967890 kubelet[2869]: E0124 00:44:27.967844 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.968738 kubelet[2869]: E0124 00:44:27.968444 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.968738 kubelet[2869]: W0124 00:44:27.968458 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.968738 kubelet[2869]: E0124 00:44:27.968472 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.970896 kubelet[2869]: E0124 00:44:27.970640 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.970896 kubelet[2869]: W0124 00:44:27.970731 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.970896 kubelet[2869]: E0124 00:44:27.970745 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.972821 kubelet[2869]: E0124 00:44:27.972526 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.972821 kubelet[2869]: W0124 00:44:27.972613 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.972821 kubelet[2869]: E0124 00:44:27.972630 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.975492 kubelet[2869]: E0124 00:44:27.975377 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.975492 kubelet[2869]: W0124 00:44:27.975466 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.975492 kubelet[2869]: E0124 00:44:27.975492 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:27.976827 kubelet[2869]: E0124 00:44:27.976728 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.976827 kubelet[2869]: W0124 00:44:27.976803 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.976827 kubelet[2869]: E0124 00:44:27.976822 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.979507 kubelet[2869]: E0124 00:44:27.979431 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.979507 kubelet[2869]: W0124 00:44:27.979456 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.979507 kubelet[2869]: E0124 00:44:27.979479 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.979749 containerd[1607]: time="2026-01-24T00:44:27.979610066Z" level=error msg="get state for 658610c2b2b7e113d1fa938f0f922c5ab543e0da5f62388e2ad8a4ba4ea55e4b" error="context deadline exceeded" Jan 24 00:44:27.979749 containerd[1607]: time="2026-01-24T00:44:27.979650581Z" level=warning msg="unknown status" status=0 Jan 24 00:44:27.981446 kubelet[2869]: E0124 00:44:27.981267 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.981446 kubelet[2869]: W0124 00:44:27.981347 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.981446 kubelet[2869]: E0124 00:44:27.981362 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.983270 kubelet[2869]: E0124 00:44:27.981867 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.983270 kubelet[2869]: W0124 00:44:27.981945 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.983270 kubelet[2869]: E0124 00:44:27.981960 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:27.984791 kubelet[2869]: E0124 00:44:27.983423 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.984791 kubelet[2869]: W0124 00:44:27.983494 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.984791 kubelet[2869]: E0124 00:44:27.983508 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.984791 kubelet[2869]: E0124 00:44:27.984426 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.984791 kubelet[2869]: W0124 00:44:27.984438 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.984791 kubelet[2869]: E0124 00:44:27.984449 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.984791 kubelet[2869]: E0124 00:44:27.984757 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.984791 kubelet[2869]: W0124 00:44:27.984768 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.984791 kubelet[2869]: E0124 00:44:27.984780 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.986637 kubelet[2869]: E0124 00:44:27.986531 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.986637 kubelet[2869]: W0124 00:44:27.986611 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.986637 kubelet[2869]: E0124 00:44:27.986625 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.993108 kubelet[2869]: E0124 00:44:27.992624 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.993108 kubelet[2869]: W0124 00:44:27.992723 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.993108 kubelet[2869]: E0124 00:44:27.992750 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:44:27.994513 kubelet[2869]: E0124 00:44:27.994476 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.994594 kubelet[2869]: W0124 00:44:27.994579 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:27.994856 kubelet[2869]: E0124 00:44:27.994755 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:27.999740 kubelet[2869]: E0124 00:44:27.999721 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:44:27.999841 kubelet[2869]: W0124 00:44:27.999823 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:44:28.000513 kubelet[2869]: E0124 00:44:28.000492 2869 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:44:28.054000 audit: BPF prog-id=166 op=LOAD Jan 24 00:44:28.054000 audit[3561]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3417 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:28.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635383631306332623262376531313364316661393338663066393232 Jan 24 00:44:28.054000 audit: BPF prog-id=167 op=LOAD Jan 24 00:44:28.054000 audit[3561]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3417 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:28.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635383631306332623262376531313364316661393338663066393232 Jan 24 00:44:28.055000 audit: BPF prog-id=167 op=UNLOAD Jan 24 00:44:28.055000 audit[3561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:28.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635383631306332623262376531313364316661393338663066393232 Jan 24 00:44:28.055000 audit: BPF prog-id=166 op=UNLOAD Jan 24 00:44:28.055000 audit[3561]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:28.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635383631306332623262376531313364316661393338663066393232 Jan 24 00:44:28.055000 audit: BPF prog-id=168 op=LOAD Jan 24 00:44:28.055000 audit[3561]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3417 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:28.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635383631306332623262376531313364316661393338663066393232 Jan 24 00:44:28.087457 containerd[1607]: time="2026-01-24T00:44:28.084901993Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 24 00:44:28.195755 containerd[1607]: time="2026-01-24T00:44:28.195693330Z" level=info msg="StartContainer for \"658610c2b2b7e113d1fa938f0f922c5ab543e0da5f62388e2ad8a4ba4ea55e4b\" returns successfully" Jan 24 00:44:28.242507 systemd[1]: cri-containerd-658610c2b2b7e113d1fa938f0f922c5ab543e0da5f62388e2ad8a4ba4ea55e4b.scope: Deactivated successfully. Jan 24 00:44:28.244651 systemd[1]: cri-containerd-658610c2b2b7e113d1fa938f0f922c5ab543e0da5f62388e2ad8a4ba4ea55e4b.scope: Consumed 413ms CPU time, 7M memory peak, 1.1M read from disk. Jan 24 00:44:28.248779 containerd[1607]: time="2026-01-24T00:44:28.248736434Z" level=info msg="received container exit event container_id:\"658610c2b2b7e113d1fa938f0f922c5ab543e0da5f62388e2ad8a4ba4ea55e4b\" id:\"658610c2b2b7e113d1fa938f0f922c5ab543e0da5f62388e2ad8a4ba4ea55e4b\" pid:3575 exited_at:{seconds:1769215468 nanos:246357101}" Jan 24 00:44:28.251000 audit: BPF prog-id=168 op=UNLOAD Jan 24 00:44:28.420730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-658610c2b2b7e113d1fa938f0f922c5ab543e0da5f62388e2ad8a4ba4ea55e4b-rootfs.mount: Deactivated successfully. 
Jan 24 00:44:28.471933 kubelet[2869]: E0124 00:44:28.466535 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:29.485955 kubelet[2869]: E0124 00:44:29.485535 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:29.489481 containerd[1607]: time="2026-01-24T00:44:29.488695091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:44:29.752996 kubelet[2869]: E0124 00:44:29.751398 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:44:31.739623 kubelet[2869]: E0124 00:44:31.738926 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:44:33.741597 kubelet[2869]: E0124 00:44:33.741498 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:44:35.738950 kubelet[2869]: E0124 00:44:35.738716 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:44:35.848027 containerd[1607]: time="2026-01-24T00:44:35.847780963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:35.850537 containerd[1607]: time="2026-01-24T00:44:35.850389649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 24 00:44:35.853359 containerd[1607]: time="2026-01-24T00:44:35.852819333Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:35.856940 containerd[1607]: time="2026-01-24T00:44:35.856686116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:35.859838 containerd[1607]: time="2026-01-24T00:44:35.857489733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.368746102s" Jan 24 00:44:35.859838 
containerd[1607]: time="2026-01-24T00:44:35.857522785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:44:35.892479 containerd[1607]: time="2026-01-24T00:44:35.892423464Z" level=info msg="CreateContainer within sandbox \"baf8147eecc32e6465a2b4641b51a47fa3334968e631692742403f9950914415\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:44:35.930394 containerd[1607]: time="2026-01-24T00:44:35.929959802Z" level=info msg="Container 5d2786c87157570846647ec3db541d75c770cffd236a9b6dd20996010e8947f3: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:44:35.957708 containerd[1607]: time="2026-01-24T00:44:35.957013642Z" level=info msg="CreateContainer within sandbox \"baf8147eecc32e6465a2b4641b51a47fa3334968e631692742403f9950914415\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5d2786c87157570846647ec3db541d75c770cffd236a9b6dd20996010e8947f3\"" Jan 24 00:44:35.958831 containerd[1607]: time="2026-01-24T00:44:35.958674312Z" level=info msg="StartContainer for \"5d2786c87157570846647ec3db541d75c770cffd236a9b6dd20996010e8947f3\"" Jan 24 00:44:35.962867 containerd[1607]: time="2026-01-24T00:44:35.962727425Z" level=info msg="connecting to shim 5d2786c87157570846647ec3db541d75c770cffd236a9b6dd20996010e8947f3" address="unix:///run/containerd/s/2e9c9d3c1dc5d19c1497c3203c4a7462c3e84feeb0ee2c30bb2ffb37ae4c3e59" protocol=ttrpc version=3 Jan 24 00:44:36.039415 systemd[1]: Started cri-containerd-5d2786c87157570846647ec3db541d75c770cffd236a9b6dd20996010e8947f3.scope - libcontainer container 5d2786c87157570846647ec3db541d75c770cffd236a9b6dd20996010e8947f3. Jan 24 00:44:36.259000 audit: BPF prog-id=169 op=LOAD Jan 24 00:44:36.269447 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 24 00:44:36.269564 kernel: audit: type=1334 audit(1769215476.259:573): prog-id=169 op=LOAD Jan 24 00:44:36.277435 kernel: audit: type=1300 audit(1769215476.259:573): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3417 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:36.259000 audit[3693]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3417 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:36.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323738366338373135373537303834363634376563336462353431 Jan 24 00:44:36.354413 kernel: audit: type=1327 audit(1769215476.259:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323738366338373135373537303834363634376563336462353431 Jan 24 00:44:36.354535 kernel: audit: type=1334 audit(1769215476.261:574): prog-id=170 op=LOAD Jan 24 00:44:36.261000 audit: BPF prog-id=170 op=LOAD Jan 24 00:44:36.363358 kernel: audit: type=1300 audit(1769215476.261:574): arch=c000003e syscall=321 success=yes 
exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3417 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:36.261000 audit[3693]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3417 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:36.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323738366338373135373537303834363634376563336462353431 Jan 24 00:44:36.261000 audit: BPF prog-id=170 op=UNLOAD Jan 24 00:44:36.436440 kernel: audit: type=1327 audit(1769215476.261:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323738366338373135373537303834363634376563336462353431 Jan 24 00:44:36.436594 kernel: audit: type=1334 audit(1769215476.261:575): prog-id=170 op=UNLOAD Jan 24 00:44:36.436645 kernel: audit: type=1300 audit(1769215476.261:575): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:36.261000 audit[3693]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:36.478330 kernel: audit: type=1327 audit(1769215476.261:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323738366338373135373537303834363634376563336462353431 Jan 24 00:44:36.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323738366338373135373537303834363634376563336462353431 Jan 24 00:44:36.517467 kernel: audit: type=1334 audit(1769215476.261:576): prog-id=169 op=UNLOAD Jan 24 00:44:36.261000 audit: BPF prog-id=169 op=UNLOAD Jan 24 00:44:36.261000 audit[3693]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:36.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323738366338373135373537303834363634376563336462353431 Jan 24 00:44:36.261000 audit: BPF prog-id=171 op=LOAD Jan 24 00:44:36.261000 audit[3693]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3417 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:44:36.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323738366338373135373537303834363634376563336462353431 Jan 24 00:44:36.547004 containerd[1607]: time="2026-01-24T00:44:36.546441822Z" level=info msg="StartContainer for \"5d2786c87157570846647ec3db541d75c770cffd236a9b6dd20996010e8947f3\" returns successfully" Jan 24 00:44:37.588475 kubelet[2869]: E0124 00:44:37.586840 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:37.740989 kubelet[2869]: E0124 00:44:37.740496 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:44:38.596820 kubelet[2869]: E0124 00:44:38.596713 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:39.135046 systemd[1]: cri-containerd-5d2786c87157570846647ec3db541d75c770cffd236a9b6dd20996010e8947f3.scope: Deactivated successfully. Jan 24 00:44:39.136040 systemd[1]: cri-containerd-5d2786c87157570846647ec3db541d75c770cffd236a9b6dd20996010e8947f3.scope: Consumed 1.737s CPU time, 182.9M memory peak, 5.5M read from disk, 171.3M written to disk. Jan 24 00:44:39.139000 audit: BPF prog-id=171 op=UNLOAD Jan 24 00:44:39.154845 containerd[1607]: time="2026-01-24T00:44:39.154620275Z" level=info msg="received container exit event container_id:\"5d2786c87157570846647ec3db541d75c770cffd236a9b6dd20996010e8947f3\" id:\"5d2786c87157570846647ec3db541d75c770cffd236a9b6dd20996010e8947f3\" pid:3706 exited_at:{seconds:1769215479 nanos:145642163}" Jan 24 00:44:39.239484 kubelet[2869]: I0124 00:44:39.238550 2869 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 24 00:44:39.282823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d2786c87157570846647ec3db541d75c770cffd236a9b6dd20996010e8947f3-rootfs.mount: Deactivated successfully. Jan 24 00:44:39.437850 systemd[1]: Created slice kubepods-besteffort-pod4771eb7f_3eb7_4b12_836b_87b2639f542e.slice - libcontainer container kubepods-besteffort-pod4771eb7f_3eb7_4b12_836b_87b2639f542e.slice. Jan 24 00:44:39.483538 systemd[1]: Created slice kubepods-besteffort-pod0329b08b_e4ed_4b35_88d7_60baae652219.slice - libcontainer container kubepods-besteffort-pod0329b08b_e4ed_4b35_88d7_60baae652219.slice. 
Jan 24 00:44:39.488534 kubelet[2869]: I0124 00:44:39.487810 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0329b08b-e4ed-4b35-88d7-60baae652219-config\") pod \"goldmane-7c778bb748-j2nlt\" (UID: \"0329b08b-e4ed-4b35-88d7-60baae652219\") " pod="calico-system/goldmane-7c778bb748-j2nlt" Jan 24 00:44:39.488534 kubelet[2869]: I0124 00:44:39.487865 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9bde79db-04de-4de4-a505-446b3c718db4-config-volume\") pod \"coredns-66bc5c9577-wp69k\" (UID: \"9bde79db-04de-4de4-a505-446b3c718db4\") " pod="kube-system/coredns-66bc5c9577-wp69k" Jan 24 00:44:39.488534 kubelet[2869]: I0124 00:44:39.487890 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr4ln\" (UniqueName: \"kubernetes.io/projected/9bde79db-04de-4de4-a505-446b3c718db4-kube-api-access-nr4ln\") pod \"coredns-66bc5c9577-wp69k\" (UID: \"9bde79db-04de-4de4-a505-446b3c718db4\") " pod="kube-system/coredns-66bc5c9577-wp69k" Jan 24 00:44:39.488534 kubelet[2869]: I0124 00:44:39.487916 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5a82bd01-5299-411a-9329-279ee1a3e6ef-calico-apiserver-certs\") pod \"calico-apiserver-5594cdc7fb-7kk8c\" (UID: \"5a82bd01-5299-411a-9329-279ee1a3e6ef\") " pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" Jan 24 00:44:39.488534 kubelet[2869]: I0124 00:44:39.487937 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55k5q\" (UniqueName: \"kubernetes.io/projected/4771eb7f-3eb7-4b12-836b-87b2639f542e-kube-api-access-55k5q\") pod \"whisker-5fcb6bc6-gz28r\" (UID: \"4771eb7f-3eb7-4b12-836b-87b2639f542e\") " pod="calico-system/whisker-5fcb6bc6-gz28r" Jan 24 00:44:39.488845 kubelet[2869]: I0124 00:44:39.487963 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0329b08b-e4ed-4b35-88d7-60baae652219-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-j2nlt\" (UID: \"0329b08b-e4ed-4b35-88d7-60baae652219\") " pod="calico-system/goldmane-7c778bb748-j2nlt" Jan 24 00:44:39.488845 kubelet[2869]: I0124 00:44:39.487988 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0329b08b-e4ed-4b35-88d7-60baae652219-goldmane-key-pair\") pod \"goldmane-7c778bb748-j2nlt\" (UID: \"0329b08b-e4ed-4b35-88d7-60baae652219\") " pod="calico-system/goldmane-7c778bb748-j2nlt" Jan 24 00:44:39.488845 kubelet[2869]: I0124 00:44:39.488009 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcwtk\" (UniqueName: \"kubernetes.io/projected/0329b08b-e4ed-4b35-88d7-60baae652219-kube-api-access-jcwtk\") pod \"goldmane-7c778bb748-j2nlt\" (UID: \"0329b08b-e4ed-4b35-88d7-60baae652219\") " pod="calico-system/goldmane-7c778bb748-j2nlt" Jan 24 00:44:39.488845 kubelet[2869]: I0124 00:44:39.488029 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btfcj\" (UniqueName: 
\"kubernetes.io/projected/5a82bd01-5299-411a-9329-279ee1a3e6ef-kube-api-access-btfcj\") pod \"calico-apiserver-5594cdc7fb-7kk8c\" (UID: \"5a82bd01-5299-411a-9329-279ee1a3e6ef\") " pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" Jan 24 00:44:39.488845 kubelet[2869]: I0124 00:44:39.488051 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a4cfce0-c870-4a39-b5a8-35bde17d6784-config-volume\") pod \"coredns-66bc5c9577-7zqd4\" (UID: \"7a4cfce0-c870-4a39-b5a8-35bde17d6784\") " pod="kube-system/coredns-66bc5c9577-7zqd4" Jan 24 00:44:39.489014 kubelet[2869]: I0124 00:44:39.488305 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f244c052-aa71-4ccd-aaea-117d2939edf5-tigera-ca-bundle\") pod \"calico-kube-controllers-5f7d444f9d-54g8g\" (UID: \"f244c052-aa71-4ccd-aaea-117d2939edf5\") " pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" Jan 24 00:44:39.489014 kubelet[2869]: I0124 00:44:39.488337 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgvfm\" (UniqueName: \"kubernetes.io/projected/f244c052-aa71-4ccd-aaea-117d2939edf5-kube-api-access-xgvfm\") pod \"calico-kube-controllers-5f7d444f9d-54g8g\" (UID: \"f244c052-aa71-4ccd-aaea-117d2939edf5\") " pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" Jan 24 00:44:39.489014 kubelet[2869]: I0124 00:44:39.488368 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h27vj\" (UniqueName: \"kubernetes.io/projected/7a4cfce0-c870-4a39-b5a8-35bde17d6784-kube-api-access-h27vj\") pod \"coredns-66bc5c9577-7zqd4\" (UID: \"7a4cfce0-c870-4a39-b5a8-35bde17d6784\") " pod="kube-system/coredns-66bc5c9577-7zqd4" Jan 24 00:44:39.489014 kubelet[2869]: I0124 00:44:39.488397 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4771eb7f-3eb7-4b12-836b-87b2639f542e-whisker-backend-key-pair\") pod \"whisker-5fcb6bc6-gz28r\" (UID: \"4771eb7f-3eb7-4b12-836b-87b2639f542e\") " pod="calico-system/whisker-5fcb6bc6-gz28r" Jan 24 00:44:39.489014 kubelet[2869]: I0124 00:44:39.488419 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4771eb7f-3eb7-4b12-836b-87b2639f542e-whisker-ca-bundle\") pod \"whisker-5fcb6bc6-gz28r\" (UID: \"4771eb7f-3eb7-4b12-836b-87b2639f542e\") " pod="calico-system/whisker-5fcb6bc6-gz28r" Jan 24 00:44:39.510935 systemd[1]: Created slice kubepods-burstable-pod7a4cfce0_c870_4a39_b5a8_35bde17d6784.slice - libcontainer container kubepods-burstable-pod7a4cfce0_c870_4a39_b5a8_35bde17d6784.slice. Jan 24 00:44:39.529487 systemd[1]: Created slice kubepods-burstable-pod9bde79db_04de_4de4_a505_446b3c718db4.slice - libcontainer container kubepods-burstable-pod9bde79db_04de_4de4_a505_446b3c718db4.slice. Jan 24 00:44:39.557030 systemd[1]: Created slice kubepods-besteffort-podf244c052_aa71_4ccd_aaea_117d2939edf5.slice - libcontainer container kubepods-besteffort-podf244c052_aa71_4ccd_aaea_117d2939edf5.slice. 
Jan 24 00:44:39.581948 systemd[1]: Created slice kubepods-besteffort-pod5a82bd01_5299_411a_9329_279ee1a3e6ef.slice - libcontainer container kubepods-besteffort-pod5a82bd01_5299_411a_9329_279ee1a3e6ef.slice. Jan 24 00:44:39.599849 systemd[1]: Created slice kubepods-besteffort-pod050a17cf_0e04_46c0_ad64_4ce3987ef3d5.slice - libcontainer container kubepods-besteffort-pod050a17cf_0e04_46c0_ad64_4ce3987ef3d5.slice. Jan 24 00:44:39.604258 kubelet[2869]: I0124 00:44:39.603759 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/050a17cf-0e04-46c0-ad64-4ce3987ef3d5-calico-apiserver-certs\") pod \"calico-apiserver-5594cdc7fb-c8l7f\" (UID: \"050a17cf-0e04-46c0-ad64-4ce3987ef3d5\") " pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" Jan 24 00:44:39.604258 kubelet[2869]: I0124 00:44:39.603930 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6k52\" (UniqueName: \"kubernetes.io/projected/050a17cf-0e04-46c0-ad64-4ce3987ef3d5-kube-api-access-z6k52\") pod \"calico-apiserver-5594cdc7fb-c8l7f\" (UID: \"050a17cf-0e04-46c0-ad64-4ce3987ef3d5\") " pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" Jan 24 00:44:39.683860 kubelet[2869]: E0124 00:44:39.682929 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:39.696660 containerd[1607]: time="2026-01-24T00:44:39.694961027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:44:39.762890 systemd[1]: Created slice kubepods-besteffort-pod985a1218_3c37_4f6d_aa83_5ce6fdad91a9.slice - libcontainer container kubepods-besteffort-pod985a1218_3c37_4f6d_aa83_5ce6fdad91a9.slice. 
Jan 24 00:44:39.788837 containerd[1607]: time="2026-01-24T00:44:39.787956185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcb6bc6-gz28r,Uid:4771eb7f-3eb7-4b12-836b-87b2639f542e,Namespace:calico-system,Attempt:0,}" Jan 24 00:44:39.803719 containerd[1607]: time="2026-01-24T00:44:39.803665584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-48xkv,Uid:985a1218-3c37-4f6d-aa83-5ce6fdad91a9,Namespace:calico-system,Attempt:0,}" Jan 24 00:44:39.820783 containerd[1607]: time="2026-01-24T00:44:39.820714747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-j2nlt,Uid:0329b08b-e4ed-4b35-88d7-60baae652219,Namespace:calico-system,Attempt:0,}" Jan 24 00:44:39.839449 kubelet[2869]: E0124 00:44:39.838002 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:39.839577 containerd[1607]: time="2026-01-24T00:44:39.839323797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7zqd4,Uid:7a4cfce0-c870-4a39-b5a8-35bde17d6784,Namespace:kube-system,Attempt:0,}" Jan 24 00:44:39.872337 kubelet[2869]: E0124 00:44:39.871381 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:39.877499 containerd[1607]: time="2026-01-24T00:44:39.875807245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wp69k,Uid:9bde79db-04de-4de4-a505-446b3c718db4,Namespace:kube-system,Attempt:0,}" Jan 24 00:44:39.882287 containerd[1607]: time="2026-01-24T00:44:39.881391494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7d444f9d-54g8g,Uid:f244c052-aa71-4ccd-aaea-117d2939edf5,Namespace:calico-system,Attempt:0,}" Jan 24 00:44:39.904024 containerd[1607]: time="2026-01-24T00:44:39.903773062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5594cdc7fb-7kk8c,Uid:5a82bd01-5299-411a-9329-279ee1a3e6ef,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:44:39.975025 containerd[1607]: time="2026-01-24T00:44:39.973922168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5594cdc7fb-c8l7f,Uid:050a17cf-0e04-46c0-ad64-4ce3987ef3d5,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:44:40.574683 containerd[1607]: time="2026-01-24T00:44:40.574630031Z" level=error msg="Failed to destroy network for sandbox \"d810472549a2c938137425fbd5efbce0ae6f61d70b0b0e70f0d426e4ad59d538\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.581591 systemd[1]: run-netns-cni\x2dbff34c9f\x2dcaef\x2d158d\x2d8dea\x2d9d8f6dc64e92.mount: Deactivated successfully. 
Jan 24 00:44:40.607633 containerd[1607]: time="2026-01-24T00:44:40.606803734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-48xkv,Uid:985a1218-3c37-4f6d-aa83-5ce6fdad91a9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d810472549a2c938137425fbd5efbce0ae6f61d70b0b0e70f0d426e4ad59d538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.612806 kubelet[2869]: E0124 00:44:40.611645 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d810472549a2c938137425fbd5efbce0ae6f61d70b0b0e70f0d426e4ad59d538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.612806 kubelet[2869]: E0124 00:44:40.611788 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d810472549a2c938137425fbd5efbce0ae6f61d70b0b0e70f0d426e4ad59d538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-48xkv" Jan 24 00:44:40.612806 kubelet[2869]: E0124 00:44:40.611809 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d810472549a2c938137425fbd5efbce0ae6f61d70b0b0e70f0d426e4ad59d538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-48xkv" Jan 24 00:44:40.614610 kubelet[2869]: E0124 00:44:40.611857 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-48xkv_calico-system(985a1218-3c37-4f6d-aa83-5ce6fdad91a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-48xkv_calico-system(985a1218-3c37-4f6d-aa83-5ce6fdad91a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d810472549a2c938137425fbd5efbce0ae6f61d70b0b0e70f0d426e4ad59d538\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:44:40.614815 containerd[1607]: time="2026-01-24T00:44:40.614322633Z" level=error msg="Failed to destroy network for sandbox \"62e03290c32b51004114136f1d301c9dc9e3947b23c30003da323f37a0f7793e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.628941 systemd[1]: run-netns-cni\x2d03e6d613\x2d7c8e\x2dce21\x2d26b4\x2d3c7cb349d5f6.mount: Deactivated successfully. 
Jan 24 00:44:40.722879 containerd[1607]: time="2026-01-24T00:44:40.722658365Z" level=error msg="Failed to destroy network for sandbox \"c3c5ebb1711627dcd4a5b29fda17803ef8f07a93846aa6426706a7e0787ba447\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.729290 systemd[1]: run-netns-cni\x2db2018df5\x2da1fd\x2d385d\x2d047e\x2def8ed318bfcc.mount: Deactivated successfully. Jan 24 00:44:40.748566 containerd[1607]: time="2026-01-24T00:44:40.748517682Z" level=error msg="Failed to destroy network for sandbox \"f43d475c1138cfcafae051b6a902b13cd005ad786aee58f369dcde73a1ef299b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.755837 systemd[1]: run-netns-cni\x2df437b02c\x2d42f9\x2d134a\x2dbe65\x2da6adac94f4d4.mount: Deactivated successfully. Jan 24 00:44:40.763975 containerd[1607]: time="2026-01-24T00:44:40.763640888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-j2nlt,Uid:0329b08b-e4ed-4b35-88d7-60baae652219,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e03290c32b51004114136f1d301c9dc9e3947b23c30003da323f37a0f7793e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.765914 kubelet[2869]: E0124 00:44:40.765857 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e03290c32b51004114136f1d301c9dc9e3947b23c30003da323f37a0f7793e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.765987 kubelet[2869]: E0124 00:44:40.765932 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e03290c32b51004114136f1d301c9dc9e3947b23c30003da323f37a0f7793e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-j2nlt" Jan 24 00:44:40.765987 kubelet[2869]: E0124 00:44:40.765958 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e03290c32b51004114136f1d301c9dc9e3947b23c30003da323f37a0f7793e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-j2nlt" Jan 24 00:44:40.766710 kubelet[2869]: E0124 00:44:40.766020 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-j2nlt_calico-system(0329b08b-e4ed-4b35-88d7-60baae652219)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-j2nlt_calico-system(0329b08b-e4ed-4b35-88d7-60baae652219)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"62e03290c32b51004114136f1d301c9dc9e3947b23c30003da323f37a0f7793e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-j2nlt" podUID="0329b08b-e4ed-4b35-88d7-60baae652219" Jan 24 00:44:40.801679 containerd[1607]: time="2026-01-24T00:44:40.801535137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wp69k,Uid:9bde79db-04de-4de4-a505-446b3c718db4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3c5ebb1711627dcd4a5b29fda17803ef8f07a93846aa6426706a7e0787ba447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.803860 kubelet[2869]: E0124 00:44:40.802967 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3c5ebb1711627dcd4a5b29fda17803ef8f07a93846aa6426706a7e0787ba447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.803860 kubelet[2869]: E0124 00:44:40.803041 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3c5ebb1711627dcd4a5b29fda17803ef8f07a93846aa6426706a7e0787ba447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wp69k" Jan 24 00:44:40.808455 kubelet[2869]: E0124 00:44:40.807051 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3c5ebb1711627dcd4a5b29fda17803ef8f07a93846aa6426706a7e0787ba447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wp69k" Jan 24 00:44:40.808723 kubelet[2869]: E0124 00:44:40.808668 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wp69k_kube-system(9bde79db-04de-4de4-a505-446b3c718db4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wp69k_kube-system(9bde79db-04de-4de4-a505-446b3c718db4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3c5ebb1711627dcd4a5b29fda17803ef8f07a93846aa6426706a7e0787ba447\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wp69k" podUID="9bde79db-04de-4de4-a505-446b3c718db4" Jan 24 00:44:40.835361 containerd[1607]: time="2026-01-24T00:44:40.833605504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5594cdc7fb-c8l7f,Uid:050a17cf-0e04-46c0-ad64-4ce3987ef3d5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f43d475c1138cfcafae051b6a902b13cd005ad786aee58f369dcde73a1ef299b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.837631 kubelet[2869]: E0124 00:44:40.837482 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f43d475c1138cfcafae051b6a902b13cd005ad786aee58f369dcde73a1ef299b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.837964 kubelet[2869]: E0124 00:44:40.837649 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f43d475c1138cfcafae051b6a902b13cd005ad786aee58f369dcde73a1ef299b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" Jan 24 00:44:40.837964 kubelet[2869]: E0124 00:44:40.837677 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f43d475c1138cfcafae051b6a902b13cd005ad786aee58f369dcde73a1ef299b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" Jan 24 00:44:40.837964 kubelet[2869]: E0124 00:44:40.837720 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5594cdc7fb-c8l7f_calico-apiserver(050a17cf-0e04-46c0-ad64-4ce3987ef3d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5594cdc7fb-c8l7f_calico-apiserver(050a17cf-0e04-46c0-ad64-4ce3987ef3d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f43d475c1138cfcafae051b6a902b13cd005ad786aee58f369dcde73a1ef299b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" podUID="050a17cf-0e04-46c0-ad64-4ce3987ef3d5" Jan 24 00:44:40.852047 containerd[1607]: time="2026-01-24T00:44:40.851591722Z" level=error msg="Failed to destroy network for sandbox \"a8d2ba8e8d69bc745c17b52868540d05fe372f243666b9147cda90fc93f493ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.855626 containerd[1607]: time="2026-01-24T00:44:40.854362329Z" level=error msg="Failed to destroy network for sandbox \"0d2eb80195398df860532b2badf1e959e20b0945b6ef1d2f53730f17d78f3396\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.860290 containerd[1607]: time="2026-01-24T00:44:40.859773372Z" level=error msg="Failed to destroy network for sandbox \"0c184c9e41c9802da2566bb070d5885d42fd8c08037f9b5057a613c6754dc404\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.862677 containerd[1607]: time="2026-01-24T00:44:40.862645795Z" level=error msg="Failed to destroy network for sandbox \"379bcfb8d41b491dfd670e2ae66b3be87bf92b3c28513a356ff5115e85a4e0ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.873785 containerd[1607]: time="2026-01-24T00:44:40.873576490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7d444f9d-54g8g,Uid:f244c052-aa71-4ccd-aaea-117d2939edf5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8d2ba8e8d69bc745c17b52868540d05fe372f243666b9147cda90fc93f493ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.888893 kubelet[2869]: E0124 00:44:40.884059 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8d2ba8e8d69bc745c17b52868540d05fe372f243666b9147cda90fc93f493ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.888893 kubelet[2869]: E0124 00:44:40.886644 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8d2ba8e8d69bc745c17b52868540d05fe372f243666b9147cda90fc93f493ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" Jan 24 00:44:40.888893 kubelet[2869]: E0124 00:44:40.886672 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8d2ba8e8d69bc745c17b52868540d05fe372f243666b9147cda90fc93f493ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" Jan 24 00:44:40.889332 kubelet[2869]: E0124 00:44:40.886807 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f7d444f9d-54g8g_calico-system(f244c052-aa71-4ccd-aaea-117d2939edf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f7d444f9d-54g8g_calico-system(f244c052-aa71-4ccd-aaea-117d2939edf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8d2ba8e8d69bc745c17b52868540d05fe372f243666b9147cda90fc93f493ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" podUID="f244c052-aa71-4ccd-aaea-117d2939edf5" Jan 24 00:44:40.892454 containerd[1607]: time="2026-01-24T00:44:40.890534489Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcb6bc6-gz28r,Uid:4771eb7f-3eb7-4b12-836b-87b2639f542e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"379bcfb8d41b491dfd670e2ae66b3be87bf92b3c28513a356ff5115e85a4e0ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.896327 kubelet[2869]: E0124 00:44:40.895381 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"379bcfb8d41b491dfd670e2ae66b3be87bf92b3c28513a356ff5115e85a4e0ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.896327 kubelet[2869]: E0124 00:44:40.895437 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"379bcfb8d41b491dfd670e2ae66b3be87bf92b3c28513a356ff5115e85a4e0ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fcb6bc6-gz28r" Jan 24 00:44:40.896327 kubelet[2869]: E0124 00:44:40.895459 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"379bcfb8d41b491dfd670e2ae66b3be87bf92b3c28513a356ff5115e85a4e0ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fcb6bc6-gz28r" Jan 24 00:44:40.896476 kubelet[2869]: E0124 00:44:40.895514 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5fcb6bc6-gz28r_calico-system(4771eb7f-3eb7-4b12-836b-87b2639f542e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5fcb6bc6-gz28r_calico-system(4771eb7f-3eb7-4b12-836b-87b2639f542e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"379bcfb8d41b491dfd670e2ae66b3be87bf92b3c28513a356ff5115e85a4e0ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fcb6bc6-gz28r" podUID="4771eb7f-3eb7-4b12-836b-87b2639f542e" Jan 24 00:44:40.918660 containerd[1607]: time="2026-01-24T00:44:40.917351993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7zqd4,Uid:7a4cfce0-c870-4a39-b5a8-35bde17d6784,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d2eb80195398df860532b2badf1e959e20b0945b6ef1d2f53730f17d78f3396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.918923 kubelet[2869]: E0124 00:44:40.917643 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d2eb80195398df860532b2badf1e959e20b0945b6ef1d2f53730f17d78f3396\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.918923 kubelet[2869]: E0124 00:44:40.917702 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d2eb80195398df860532b2badf1e959e20b0945b6ef1d2f53730f17d78f3396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7zqd4" Jan 24 00:44:40.918923 kubelet[2869]: E0124 00:44:40.917728 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d2eb80195398df860532b2badf1e959e20b0945b6ef1d2f53730f17d78f3396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7zqd4" Jan 24 00:44:40.919014 kubelet[2869]: E0124 00:44:40.917871 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-7zqd4_kube-system(7a4cfce0-c870-4a39-b5a8-35bde17d6784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7zqd4_kube-system(7a4cfce0-c870-4a39-b5a8-35bde17d6784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d2eb80195398df860532b2badf1e959e20b0945b6ef1d2f53730f17d78f3396\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7zqd4" podUID="7a4cfce0-c870-4a39-b5a8-35bde17d6784" Jan 24 00:44:40.923023 containerd[1607]: time="2026-01-24T00:44:40.922791813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5594cdc7fb-7kk8c,Uid:5a82bd01-5299-411a-9329-279ee1a3e6ef,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c184c9e41c9802da2566bb070d5885d42fd8c08037f9b5057a613c6754dc404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.925402 kubelet[2869]: E0124 00:44:40.924604 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c184c9e41c9802da2566bb070d5885d42fd8c08037f9b5057a613c6754dc404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:40.925402 kubelet[2869]: E0124 00:44:40.924655 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c184c9e41c9802da2566bb070d5885d42fd8c08037f9b5057a613c6754dc404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" Jan 24 00:44:40.925402 kubelet[2869]: E0124 00:44:40.924677 2869 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c184c9e41c9802da2566bb070d5885d42fd8c08037f9b5057a613c6754dc404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" Jan 24 00:44:40.925934 kubelet[2869]: E0124 00:44:40.924756 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5594cdc7fb-7kk8c_calico-apiserver(5a82bd01-5299-411a-9329-279ee1a3e6ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5594cdc7fb-7kk8c_calico-apiserver(5a82bd01-5299-411a-9329-279ee1a3e6ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c184c9e41c9802da2566bb070d5885d42fd8c08037f9b5057a613c6754dc404\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" podUID="5a82bd01-5299-411a-9329-279ee1a3e6ef" Jan 24 00:44:41.284430 systemd[1]: run-netns-cni\x2d00534a29\x2dae6b\x2d1ecb\x2da7dd\x2d61ce325ef065.mount: Deactivated successfully. Jan 24 00:44:41.284650 systemd[1]: run-netns-cni\x2df934f192\x2dc1e5\x2d0647\x2dadb1\x2d3152e78e318e.mount: Deactivated successfully. Jan 24 00:44:41.284749 systemd[1]: run-netns-cni\x2d049cf617\x2dd4b5\x2d46b0\x2de8a1\x2d719b067ec4df.mount: Deactivated successfully. Jan 24 00:44:41.284857 systemd[1]: run-netns-cni\x2d0d2a2a8d\x2db8a4\x2d1ba3\x2dfbb9\x2d9f5690e7f79f.mount: Deactivated successfully. Jan 24 00:44:51.797299 containerd[1607]: time="2026-01-24T00:44:51.796979692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcb6bc6-gz28r,Uid:4771eb7f-3eb7-4b12-836b-87b2639f542e,Namespace:calico-system,Attempt:0,}" Jan 24 00:44:52.094834 containerd[1607]: time="2026-01-24T00:44:52.094606418Z" level=error msg="Failed to destroy network for sandbox \"4920a02a6042c3c12749c204baf44533937991bcb362cdcf91b7ba0fe2209478\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:52.102587 systemd[1]: run-netns-cni\x2d21c30ad1\x2d37a7\x2de9dc\x2d7557\x2d30628edf8258.mount: Deactivated successfully. 
Jan 24 00:44:52.108441 containerd[1607]: time="2026-01-24T00:44:52.108088740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcb6bc6-gz28r,Uid:4771eb7f-3eb7-4b12-836b-87b2639f542e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4920a02a6042c3c12749c204baf44533937991bcb362cdcf91b7ba0fe2209478\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:52.109372 kubelet[2869]: E0124 00:44:52.109039 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4920a02a6042c3c12749c204baf44533937991bcb362cdcf91b7ba0fe2209478\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:52.109372 kubelet[2869]: E0124 00:44:52.109334 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4920a02a6042c3c12749c204baf44533937991bcb362cdcf91b7ba0fe2209478\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fcb6bc6-gz28r" Jan 24 00:44:52.110010 kubelet[2869]: E0124 00:44:52.109499 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4920a02a6042c3c12749c204baf44533937991bcb362cdcf91b7ba0fe2209478\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fcb6bc6-gz28r" Jan 24 00:44:52.110010 kubelet[2869]: E0124 00:44:52.109570 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5fcb6bc6-gz28r_calico-system(4771eb7f-3eb7-4b12-836b-87b2639f542e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5fcb6bc6-gz28r_calico-system(4771eb7f-3eb7-4b12-836b-87b2639f542e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4920a02a6042c3c12749c204baf44533937991bcb362cdcf91b7ba0fe2209478\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fcb6bc6-gz28r" podUID="4771eb7f-3eb7-4b12-836b-87b2639f542e" Jan 24 00:44:52.747750 containerd[1607]: time="2026-01-24T00:44:52.747641784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-48xkv,Uid:985a1218-3c37-4f6d-aa83-5ce6fdad91a9,Namespace:calico-system,Attempt:0,}" Jan 24 00:44:52.758453 containerd[1607]: time="2026-01-24T00:44:52.758392277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7d444f9d-54g8g,Uid:f244c052-aa71-4ccd-aaea-117d2939edf5,Namespace:calico-system,Attempt:0,}" Jan 24 00:44:52.975557 containerd[1607]: time="2026-01-24T00:44:52.974912720Z" level=error msg="Failed to destroy network for sandbox \"1988acd2900039eccf12d5e444859fa268f135e8b96f66c5f2ccd3d4cf263a33\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:52.985452 systemd[1]: run-netns-cni\x2dead6111d\x2d7775\x2d3e5a\x2daac3\x2d0af94226e5b9.mount: Deactivated successfully. Jan 24 00:44:53.033885 containerd[1607]: time="2026-01-24T00:44:53.033655751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-48xkv,Uid:985a1218-3c37-4f6d-aa83-5ce6fdad91a9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1988acd2900039eccf12d5e444859fa268f135e8b96f66c5f2ccd3d4cf263a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:53.037606 kubelet[2869]: E0124 00:44:53.034428 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1988acd2900039eccf12d5e444859fa268f135e8b96f66c5f2ccd3d4cf263a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:53.037606 kubelet[2869]: E0124 00:44:53.034498 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1988acd2900039eccf12d5e444859fa268f135e8b96f66c5f2ccd3d4cf263a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-48xkv" Jan 24 00:44:53.037606 kubelet[2869]: E0124 00:44:53.034524 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1988acd2900039eccf12d5e444859fa268f135e8b96f66c5f2ccd3d4cf263a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-48xkv" Jan 24 00:44:53.037815 kubelet[2869]: E0124 00:44:53.034588 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-48xkv_calico-system(985a1218-3c37-4f6d-aa83-5ce6fdad91a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-48xkv_calico-system(985a1218-3c37-4f6d-aa83-5ce6fdad91a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1988acd2900039eccf12d5e444859fa268f135e8b96f66c5f2ccd3d4cf263a33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:44:53.055814 containerd[1607]: time="2026-01-24T00:44:53.055763835Z" level=error msg="Failed to destroy network for sandbox \"d11b8f33f8a126d7c616a596b7c8a1cea5489a40282a452221b3793d22b8e0e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:53.061564 systemd[1]: 
run-netns-cni\x2dfb62023f\x2d10fb\x2d28b3\x2d4aeb\x2daac704fc3874.mount: Deactivated successfully. Jan 24 00:44:53.071303 containerd[1607]: time="2026-01-24T00:44:53.070457923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7d444f9d-54g8g,Uid:f244c052-aa71-4ccd-aaea-117d2939edf5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d11b8f33f8a126d7c616a596b7c8a1cea5489a40282a452221b3793d22b8e0e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:53.071586 kubelet[2869]: E0124 00:44:53.071306 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d11b8f33f8a126d7c616a596b7c8a1cea5489a40282a452221b3793d22b8e0e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:53.071586 kubelet[2869]: E0124 00:44:53.071370 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d11b8f33f8a126d7c616a596b7c8a1cea5489a40282a452221b3793d22b8e0e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" Jan 24 00:44:53.071586 kubelet[2869]: E0124 00:44:53.071398 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d11b8f33f8a126d7c616a596b7c8a1cea5489a40282a452221b3793d22b8e0e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" Jan 24 00:44:53.071722 kubelet[2869]: E0124 00:44:53.071457 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f7d444f9d-54g8g_calico-system(f244c052-aa71-4ccd-aaea-117d2939edf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f7d444f9d-54g8g_calico-system(f244c052-aa71-4ccd-aaea-117d2939edf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d11b8f33f8a126d7c616a596b7c8a1cea5489a40282a452221b3793d22b8e0e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" podUID="f244c052-aa71-4ccd-aaea-117d2939edf5" Jan 24 00:44:54.806828 containerd[1607]: time="2026-01-24T00:44:54.806773080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5594cdc7fb-7kk8c,Uid:5a82bd01-5299-411a-9329-279ee1a3e6ef,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:44:54.810600 kubelet[2869]: E0124 00:44:54.810571 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:54.812570 containerd[1607]: 
time="2026-01-24T00:44:54.812540252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7zqd4,Uid:7a4cfce0-c870-4a39-b5a8-35bde17d6784,Namespace:kube-system,Attempt:0,}" Jan 24 00:44:54.813302 kubelet[2869]: E0124 00:44:54.813109 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:54.814715 containerd[1607]: time="2026-01-24T00:44:54.814687287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wp69k,Uid:9bde79db-04de-4de4-a505-446b3c718db4,Namespace:kube-system,Attempt:0,}" Jan 24 00:44:55.117349 containerd[1607]: time="2026-01-24T00:44:55.116684184Z" level=error msg="Failed to destroy network for sandbox \"8e8584a1c3a9f9879b87431523905bb927d8320bfb02a13d9b407ac7634ecc90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:55.123342 systemd[1]: run-netns-cni\x2da13afeb5\x2d2988\x2df271\x2d7fe3\x2d738706063c14.mount: Deactivated successfully. Jan 24 00:44:55.140361 containerd[1607]: time="2026-01-24T00:44:55.140317322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5594cdc7fb-7kk8c,Uid:5a82bd01-5299-411a-9329-279ee1a3e6ef,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8584a1c3a9f9879b87431523905bb927d8320bfb02a13d9b407ac7634ecc90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:55.140876 kubelet[2869]: E0124 00:44:55.140836 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8584a1c3a9f9879b87431523905bb927d8320bfb02a13d9b407ac7634ecc90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:55.141032 kubelet[2869]: E0124 00:44:55.141000 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8584a1c3a9f9879b87431523905bb927d8320bfb02a13d9b407ac7634ecc90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" Jan 24 00:44:55.141315 kubelet[2869]: E0124 00:44:55.141112 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8584a1c3a9f9879b87431523905bb927d8320bfb02a13d9b407ac7634ecc90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" Jan 24 00:44:55.141606 kubelet[2869]: E0124 00:44:55.141564 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5594cdc7fb-7kk8c_calico-apiserver(5a82bd01-5299-411a-9329-279ee1a3e6ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5594cdc7fb-7kk8c_calico-apiserver(5a82bd01-5299-411a-9329-279ee1a3e6ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e8584a1c3a9f9879b87431523905bb927d8320bfb02a13d9b407ac7634ecc90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" podUID="5a82bd01-5299-411a-9329-279ee1a3e6ef" Jan 24 00:44:55.188461 containerd[1607]: time="2026-01-24T00:44:55.188406761Z" level=error msg="Failed to destroy network for sandbox \"886ecc6687eb6ba245e44163d9fbf4e79c7187d2806cf44a69732f741d8edcf8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:55.191876 systemd[1]: run-netns-cni\x2d9b69afa6\x2d11e9\x2d487c\x2dbb4c\x2d6efeb996a760.mount: Deactivated successfully. Jan 24 00:44:55.205305 containerd[1607]: time="2026-01-24T00:44:55.205098873Z" level=error msg="Failed to destroy network for sandbox \"769a659dc49a64abb56f030e355ade7260b202f5616af83d6ceb1e58a2d05c2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:55.207662 containerd[1607]: time="2026-01-24T00:44:55.205490641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wp69k,Uid:9bde79db-04de-4de4-a505-446b3c718db4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"886ecc6687eb6ba245e44163d9fbf4e79c7187d2806cf44a69732f741d8edcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:55.208366 kubelet[2869]: E0124 00:44:55.208314 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"886ecc6687eb6ba245e44163d9fbf4e79c7187d2806cf44a69732f741d8edcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:55.208448 kubelet[2869]: E0124 00:44:55.208377 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"886ecc6687eb6ba245e44163d9fbf4e79c7187d2806cf44a69732f741d8edcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wp69k" Jan 24 00:44:55.208448 kubelet[2869]: E0124 00:44:55.208405 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"886ecc6687eb6ba245e44163d9fbf4e79c7187d2806cf44a69732f741d8edcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wp69k" Jan 24 00:44:55.209653 kubelet[2869]: E0124 00:44:55.209537 2869 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wp69k_kube-system(9bde79db-04de-4de4-a505-446b3c718db4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wp69k_kube-system(9bde79db-04de-4de4-a505-446b3c718db4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"886ecc6687eb6ba245e44163d9fbf4e79c7187d2806cf44a69732f741d8edcf8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wp69k" podUID="9bde79db-04de-4de4-a505-446b3c718db4" Jan 24 00:44:55.233453 containerd[1607]: time="2026-01-24T00:44:55.233397141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7zqd4,Uid:7a4cfce0-c870-4a39-b5a8-35bde17d6784,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"769a659dc49a64abb56f030e355ade7260b202f5616af83d6ceb1e58a2d05c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:55.234018 kubelet[2869]: E0124 00:44:55.233977 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769a659dc49a64abb56f030e355ade7260b202f5616af83d6ceb1e58a2d05c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:55.235979 kubelet[2869]: E0124 00:44:55.235949 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769a659dc49a64abb56f030e355ade7260b202f5616af83d6ceb1e58a2d05c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7zqd4" Jan 24 00:44:55.236105 kubelet[2869]: E0124 00:44:55.236081 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769a659dc49a64abb56f030e355ade7260b202f5616af83d6ceb1e58a2d05c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7zqd4" Jan 24 00:44:55.236709 kubelet[2869]: E0124 00:44:55.236673 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-7zqd4_kube-system(7a4cfce0-c870-4a39-b5a8-35bde17d6784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7zqd4_kube-system(7a4cfce0-c870-4a39-b5a8-35bde17d6784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"769a659dc49a64abb56f030e355ade7260b202f5616af83d6ceb1e58a2d05c2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7zqd4" podUID="7a4cfce0-c870-4a39-b5a8-35bde17d6784" Jan 24 00:44:55.758630 containerd[1607]: 
time="2026-01-24T00:44:55.758421947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5594cdc7fb-c8l7f,Uid:050a17cf-0e04-46c0-ad64-4ce3987ef3d5,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:44:55.777421 containerd[1607]: time="2026-01-24T00:44:55.775644091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-j2nlt,Uid:0329b08b-e4ed-4b35-88d7-60baae652219,Namespace:calico-system,Attempt:0,}" Jan 24 00:44:55.829828 systemd[1]: run-netns-cni\x2d7b374369\x2d1f9f\x2d6141\x2df990\x2d04d2f41c7e43.mount: Deactivated successfully. Jan 24 00:44:56.144845 containerd[1607]: time="2026-01-24T00:44:56.144780554Z" level=error msg="Failed to destroy network for sandbox \"1e10bcf20bc642a96b011f4608143770394454c9278721cb58942d04d63c35c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:56.150479 systemd[1]: run-netns-cni\x2d2022886a\x2df644\x2d9e4b\x2d7025\x2dd18e8177916b.mount: Deactivated successfully. Jan 24 00:44:56.157439 containerd[1607]: time="2026-01-24T00:44:56.155735014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-j2nlt,Uid:0329b08b-e4ed-4b35-88d7-60baae652219,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e10bcf20bc642a96b011f4608143770394454c9278721cb58942d04d63c35c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:56.157638 kubelet[2869]: E0124 00:44:56.156039 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e10bcf20bc642a96b011f4608143770394454c9278721cb58942d04d63c35c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:56.157638 kubelet[2869]: E0124 00:44:56.156099 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e10bcf20bc642a96b011f4608143770394454c9278721cb58942d04d63c35c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-j2nlt" Jan 24 00:44:56.157638 kubelet[2869]: E0124 00:44:56.156275 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e10bcf20bc642a96b011f4608143770394454c9278721cb58942d04d63c35c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-j2nlt" Jan 24 00:44:56.158047 kubelet[2869]: E0124 00:44:56.156404 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-j2nlt_calico-system(0329b08b-e4ed-4b35-88d7-60baae652219)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-j2nlt_calico-system(0329b08b-e4ed-4b35-88d7-60baae652219)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"1e10bcf20bc642a96b011f4608143770394454c9278721cb58942d04d63c35c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-j2nlt" podUID="0329b08b-e4ed-4b35-88d7-60baae652219" Jan 24 00:44:56.201691 containerd[1607]: time="2026-01-24T00:44:56.201486241Z" level=error msg="Failed to destroy network for sandbox \"5a613ff37eea844243f47663b42cae6b5c7a88488ec081e0c9620f32c2ef3509\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:56.207980 systemd[1]: run-netns-cni\x2dac98cd5e\x2d63ea\x2dfe66\x2d63ab\x2d3ea3d27e9ecd.mount: Deactivated successfully. Jan 24 00:44:56.216817 containerd[1607]: time="2026-01-24T00:44:56.216644445Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5594cdc7fb-c8l7f,Uid:050a17cf-0e04-46c0-ad64-4ce3987ef3d5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a613ff37eea844243f47663b42cae6b5c7a88488ec081e0c9620f32c2ef3509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:56.217629 kubelet[2869]: E0124 00:44:56.217091 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a613ff37eea844243f47663b42cae6b5c7a88488ec081e0c9620f32c2ef3509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:44:56.217629 kubelet[2869]: E0124 00:44:56.217357 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a613ff37eea844243f47663b42cae6b5c7a88488ec081e0c9620f32c2ef3509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" Jan 24 00:44:56.217629 kubelet[2869]: E0124 00:44:56.217386 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a613ff37eea844243f47663b42cae6b5c7a88488ec081e0c9620f32c2ef3509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" Jan 24 00:44:56.217792 kubelet[2869]: E0124 00:44:56.217446 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5594cdc7fb-c8l7f_calico-apiserver(050a17cf-0e04-46c0-ad64-4ce3987ef3d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5594cdc7fb-c8l7f_calico-apiserver(050a17cf-0e04-46c0-ad64-4ce3987ef3d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a613ff37eea844243f47663b42cae6b5c7a88488ec081e0c9620f32c2ef3509\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" podUID="050a17cf-0e04-46c0-ad64-4ce3987ef3d5" Jan 24 00:45:01.748357 kubelet[2869]: E0124 00:45:01.747944 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:03.745113 kubelet[2869]: E0124 00:45:03.740520 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:04.765375 containerd[1607]: time="2026-01-24T00:45:04.764919329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7d444f9d-54g8g,Uid:f244c052-aa71-4ccd-aaea-117d2939edf5,Namespace:calico-system,Attempt:0,}" Jan 24 00:45:05.095619 containerd[1607]: time="2026-01-24T00:45:05.094784434Z" level=error msg="Failed to destroy network for sandbox \"cf30ab3677303d1aeefdf402c83aae14189d12be998eac74768ecbc163a75de4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:05.101007 systemd[1]: run-netns-cni\x2d90cae260\x2d94df\x2dec62\x2d1507\x2d19d3a4275c56.mount: Deactivated successfully. Jan 24 00:45:05.118063 containerd[1607]: time="2026-01-24T00:45:05.115542410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7d444f9d-54g8g,Uid:f244c052-aa71-4ccd-aaea-117d2939edf5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf30ab3677303d1aeefdf402c83aae14189d12be998eac74768ecbc163a75de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:05.118471 kubelet[2869]: E0124 00:45:05.116495 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf30ab3677303d1aeefdf402c83aae14189d12be998eac74768ecbc163a75de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:05.118471 kubelet[2869]: E0124 00:45:05.116753 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf30ab3677303d1aeefdf402c83aae14189d12be998eac74768ecbc163a75de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" Jan 24 00:45:05.118471 kubelet[2869]: E0124 00:45:05.116772 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf30ab3677303d1aeefdf402c83aae14189d12be998eac74768ecbc163a75de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" Jan 
24 00:45:05.118926 kubelet[2869]: E0124 00:45:05.117021 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f7d444f9d-54g8g_calico-system(f244c052-aa71-4ccd-aaea-117d2939edf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f7d444f9d-54g8g_calico-system(f244c052-aa71-4ccd-aaea-117d2939edf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf30ab3677303d1aeefdf402c83aae14189d12be998eac74768ecbc163a75de4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" podUID="f244c052-aa71-4ccd-aaea-117d2939edf5" Jan 24 00:45:05.282794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3031824075.mount: Deactivated successfully. Jan 24 00:45:05.370389 containerd[1607]: time="2026-01-24T00:45:05.369993405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:05.413743 containerd[1607]: time="2026-01-24T00:45:05.382077682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 24 00:45:05.413743 containerd[1607]: time="2026-01-24T00:45:05.392916968Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:05.413743 containerd[1607]: time="2026-01-24T00:45:05.400633046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 25.705250955s" Jan 24 00:45:05.413743 containerd[1607]: time="2026-01-24T00:45:05.412692540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:45:05.417358 containerd[1607]: time="2026-01-24T00:45:05.416775822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:05.487322 containerd[1607]: time="2026-01-24T00:45:05.486975724Z" level=info msg="CreateContainer within sandbox \"baf8147eecc32e6465a2b4641b51a47fa3334968e631692742403f9950914415\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:45:05.560882 containerd[1607]: time="2026-01-24T00:45:05.560663398Z" level=info msg="Container 5eb9e96e2dfa758ee20cf91c49796815add733e4fcfe410f1de2a49e360d6af9: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:45:05.613298 containerd[1607]: time="2026-01-24T00:45:05.612432628Z" level=info msg="CreateContainer within sandbox \"baf8147eecc32e6465a2b4641b51a47fa3334968e631692742403f9950914415\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5eb9e96e2dfa758ee20cf91c49796815add733e4fcfe410f1de2a49e360d6af9\"" Jan 24 00:45:05.615744 containerd[1607]: time="2026-01-24T00:45:05.615714142Z" level=info msg="StartContainer for 
\"5eb9e96e2dfa758ee20cf91c49796815add733e4fcfe410f1de2a49e360d6af9\"" Jan 24 00:45:05.622522 containerd[1607]: time="2026-01-24T00:45:05.621963512Z" level=info msg="connecting to shim 5eb9e96e2dfa758ee20cf91c49796815add733e4fcfe410f1de2a49e360d6af9" address="unix:///run/containerd/s/2e9c9d3c1dc5d19c1497c3203c4a7462c3e84feeb0ee2c30bb2ffb37ae4c3e59" protocol=ttrpc version=3 Jan 24 00:45:05.711629 systemd[1]: Started cri-containerd-5eb9e96e2dfa758ee20cf91c49796815add733e4fcfe410f1de2a49e360d6af9.scope - libcontainer container 5eb9e96e2dfa758ee20cf91c49796815add733e4fcfe410f1de2a49e360d6af9. Jan 24 00:45:05.759507 containerd[1607]: time="2026-01-24T00:45:05.758843545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcb6bc6-gz28r,Uid:4771eb7f-3eb7-4b12-836b-87b2639f542e,Namespace:calico-system,Attempt:0,}" Jan 24 00:45:06.009669 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 24 00:45:06.009844 kernel: audit: type=1334 audit(1769215505.995:579): prog-id=172 op=LOAD Jan 24 00:45:05.995000 audit: BPF prog-id=172 op=LOAD Jan 24 00:45:05.995000 audit[4297]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3417 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:06.061358 kernel: audit: type=1300 audit(1769215505.995:579): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3417 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623965393665326466613735386565323063663931633439373936 Jan 24 00:45:06.090525 kernel: audit: type=1327 audit(1769215505.995:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623965393665326466613735386565323063663931633439373936 Jan 24 00:45:06.090638 kernel: audit: type=1334 audit(1769215505.995:580): prog-id=173 op=LOAD Jan 24 00:45:05.995000 audit: BPF prog-id=173 op=LOAD Jan 24 00:45:05.995000 audit[4297]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3417 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:06.137006 containerd[1607]: time="2026-01-24T00:45:06.121378926Z" level=error msg="Failed to destroy network for sandbox \"decd9555abf496082526cd1072cfd6d82d995c5e1c3551c081e240eb28a60dc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:06.146649 kernel: audit: type=1300 audit(1769215505.995:580): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3417 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:06.144529 systemd[1]: run-netns-cni\x2def2c84ea\x2d2a12\x2db675\x2d29b0\x2df3155c36e3e3.mount: Deactivated successfully. Jan 24 00:45:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623965393665326466613735386565323063663931633439373936 Jan 24 00:45:05.995000 audit: BPF prog-id=173 op=UNLOAD Jan 24 00:45:06.186628 containerd[1607]: time="2026-01-24T00:45:06.170698729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcb6bc6-gz28r,Uid:4771eb7f-3eb7-4b12-836b-87b2639f542e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"decd9555abf496082526cd1072cfd6d82d995c5e1c3551c081e240eb28a60dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:06.186837 kubelet[2869]: E0124 00:45:06.171396 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"decd9555abf496082526cd1072cfd6d82d995c5e1c3551c081e240eb28a60dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:06.186837 kubelet[2869]: E0124 00:45:06.171448 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"decd9555abf496082526cd1072cfd6d82d995c5e1c3551c081e240eb28a60dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fcb6bc6-gz28r" Jan 24 00:45:06.186837 kubelet[2869]: E0124 00:45:06.171472 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"decd9555abf496082526cd1072cfd6d82d995c5e1c3551c081e240eb28a60dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fcb6bc6-gz28r" Jan 24 00:45:06.188071 kubelet[2869]: E0124 00:45:06.171532 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5fcb6bc6-gz28r_calico-system(4771eb7f-3eb7-4b12-836b-87b2639f542e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5fcb6bc6-gz28r_calico-system(4771eb7f-3eb7-4b12-836b-87b2639f542e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"decd9555abf496082526cd1072cfd6d82d995c5e1c3551c081e240eb28a60dc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fcb6bc6-gz28r" podUID="4771eb7f-3eb7-4b12-836b-87b2639f542e" Jan 24 00:45:06.193965 kernel: audit: type=1327 audit(1769215505.995:580): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623965393665326466613735386565323063663931633439373936 Jan 24 00:45:06.194102 kernel: audit: type=1334 audit(1769215505.995:581): prog-id=173 op=UNLOAD Jan 24 00:45:06.194359 kernel: audit: type=1300 audit(1769215505.995:581): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:05.995000 audit[4297]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623965393665326466613735386565323063663931633439373936 Jan 24 00:45:06.279392 kernel: audit: type=1327 audit(1769215505.995:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623965393665326466613735386565323063663931633439373936 Jan 24 00:45:06.279518 kernel: audit: type=1334 audit(1769215505.995:582): prog-id=172 op=UNLOAD Jan 24 00:45:05.995000 audit: BPF prog-id=172 op=UNLOAD Jan 24 00:45:05.995000 audit[4297]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623965393665326466613735386565323063663931633439373936 Jan 24 00:45:05.995000 audit: BPF prog-id=174 op=LOAD Jan 24 00:45:05.995000 audit[4297]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3417 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623965393665326466613735386565323063663931633439373936 Jan 24 00:45:06.326770 containerd[1607]: time="2026-01-24T00:45:06.325971752Z" level=info msg="StartContainer for \"5eb9e96e2dfa758ee20cf91c49796815add733e4fcfe410f1de2a49e360d6af9\" returns successfully" Jan 24 00:45:06.747583 containerd[1607]: time="2026-01-24T00:45:06.747394716Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-48xkv,Uid:985a1218-3c37-4f6d-aa83-5ce6fdad91a9,Namespace:calico-system,Attempt:0,}" Jan 24 00:45:06.890448 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:45:06.890679 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 24 00:45:07.073342 containerd[1607]: time="2026-01-24T00:45:07.066356524Z" level=error msg="Failed to destroy network for sandbox \"525c770851cff198287b1f23881827b4d50b380e56f4fe13fa18298575e853db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:07.076055 systemd[1]: run-netns-cni\x2d72e4049e\x2d66c5\x2ded31\x2d8758\x2d57f8ebe50b5d.mount: Deactivated successfully. Jan 24 00:45:07.096493 containerd[1607]: time="2026-01-24T00:45:07.096441796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-48xkv,Uid:985a1218-3c37-4f6d-aa83-5ce6fdad91a9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"525c770851cff198287b1f23881827b4d50b380e56f4fe13fa18298575e853db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:07.098677 kubelet[2869]: E0124 00:45:07.097962 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"525c770851cff198287b1f23881827b4d50b380e56f4fe13fa18298575e853db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:07.098677 kubelet[2869]: E0124 00:45:07.098077 2869 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"525c770851cff198287b1f23881827b4d50b380e56f4fe13fa18298575e853db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-48xkv" Jan 24 00:45:07.098677 kubelet[2869]: E0124 00:45:07.098106 2869 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"525c770851cff198287b1f23881827b4d50b380e56f4fe13fa18298575e853db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-48xkv" Jan 24 00:45:07.099468 kubelet[2869]: E0124 00:45:07.099385 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-48xkv_calico-system(985a1218-3c37-4f6d-aa83-5ce6fdad91a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-48xkv_calico-system(985a1218-3c37-4f6d-aa83-5ce6fdad91a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"525c770851cff198287b1f23881827b4d50b380e56f4fe13fa18298575e853db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:45:07.233441 kubelet[2869]: E0124 00:45:07.232543 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:07.476898 kubelet[2869]: I0124 00:45:07.476766 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lvg2n" podStartSLOduration=4.019216495 podStartE2EDuration="52.476746909s" podCreationTimestamp="2026-01-24 00:44:15 +0000 UTC" firstStartedPulling="2026-01-24 00:44:16.962441382 +0000 UTC m=+43.681741598" lastFinishedPulling="2026-01-24 00:45:05.419971796 +0000 UTC m=+92.139272012" observedRunningTime="2026-01-24 00:45:07.33640611 +0000 UTC m=+94.055706346" watchObservedRunningTime="2026-01-24 00:45:07.476746909 +0000 UTC m=+94.196047125" Jan 24 00:45:07.628025 kubelet[2869]: I0124 00:45:07.624616 2869 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4771eb7f-3eb7-4b12-836b-87b2639f542e-whisker-backend-key-pair\") pod \"4771eb7f-3eb7-4b12-836b-87b2639f542e\" (UID: \"4771eb7f-3eb7-4b12-836b-87b2639f542e\") " Jan 24 00:45:07.628025 kubelet[2869]: I0124 00:45:07.624676 2869 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55k5q\" (UniqueName: \"kubernetes.io/projected/4771eb7f-3eb7-4b12-836b-87b2639f542e-kube-api-access-55k5q\") pod \"4771eb7f-3eb7-4b12-836b-87b2639f542e\" (UID: \"4771eb7f-3eb7-4b12-836b-87b2639f542e\") " Jan 24 00:45:07.628025 kubelet[2869]: I0124 00:45:07.624726 2869 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4771eb7f-3eb7-4b12-836b-87b2639f542e-whisker-ca-bundle\") pod \"4771eb7f-3eb7-4b12-836b-87b2639f542e\" (UID: \"4771eb7f-3eb7-4b12-836b-87b2639f542e\") " Jan 24 00:45:07.634868 kubelet[2869]: I0124 00:45:07.634647 2869 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4771eb7f-3eb7-4b12-836b-87b2639f542e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4771eb7f-3eb7-4b12-836b-87b2639f542e" (UID: "4771eb7f-3eb7-4b12-836b-87b2639f542e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:45:07.662125 systemd[1]: var-lib-kubelet-pods-4771eb7f\x2d3eb7\x2d4b12\x2d836b\x2d87b2639f542e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d55k5q.mount: Deactivated successfully. Jan 24 00:45:07.663620 kubelet[2869]: I0124 00:45:07.662911 2869 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4771eb7f-3eb7-4b12-836b-87b2639f542e-kube-api-access-55k5q" (OuterVolumeSpecName: "kube-api-access-55k5q") pod "4771eb7f-3eb7-4b12-836b-87b2639f542e" (UID: "4771eb7f-3eb7-4b12-836b-87b2639f542e"). InnerVolumeSpecName "kube-api-access-55k5q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:45:07.671325 kubelet[2869]: I0124 00:45:07.670711 2869 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4771eb7f-3eb7-4b12-836b-87b2639f542e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4771eb7f-3eb7-4b12-836b-87b2639f542e" (UID: "4771eb7f-3eb7-4b12-836b-87b2639f542e"). 
InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:45:07.671682 systemd[1]: var-lib-kubelet-pods-4771eb7f\x2d3eb7\x2d4b12\x2d836b\x2d87b2639f542e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:45:07.736129 kubelet[2869]: I0124 00:45:07.735908 2869 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4771eb7f-3eb7-4b12-836b-87b2639f542e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 24 00:45:07.736129 kubelet[2869]: I0124 00:45:07.737028 2869 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4771eb7f-3eb7-4b12-836b-87b2639f542e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 24 00:45:07.736129 kubelet[2869]: I0124 00:45:07.737058 2869 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-55k5q\" (UniqueName: \"kubernetes.io/projected/4771eb7f-3eb7-4b12-836b-87b2639f542e-kube-api-access-55k5q\") on node \"localhost\" DevicePath \"\"" Jan 24 00:45:07.766937 containerd[1607]: time="2026-01-24T00:45:07.765102286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5594cdc7fb-c8l7f,Uid:050a17cf-0e04-46c0-ad64-4ce3987ef3d5,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:45:07.772951 kubelet[2869]: E0124 00:45:07.771580 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:07.776344 containerd[1607]: time="2026-01-24T00:45:07.775416139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wp69k,Uid:9bde79db-04de-4de4-a505-446b3c718db4,Namespace:kube-system,Attempt:0,}" Jan 24 00:45:07.850924 systemd[1]: Removed slice kubepods-besteffort-pod4771eb7f_3eb7_4b12_836b_87b2639f542e.slice - libcontainer container kubepods-besteffort-pod4771eb7f_3eb7_4b12_836b_87b2639f542e.slice. Jan 24 00:45:08.223127 kubelet[2869]: E0124 00:45:08.223059 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:08.751473 kubelet[2869]: E0124 00:45:08.751060 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:08.754601 containerd[1607]: time="2026-01-24T00:45:08.753546328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7zqd4,Uid:7a4cfce0-c870-4a39-b5a8-35bde17d6784,Namespace:kube-system,Attempt:0,}" Jan 24 00:45:08.777001 systemd[1]: Created slice kubepods-besteffort-pode041bbba_486b_4bf8_b212_ca4fbb2d4a57.slice - libcontainer container kubepods-besteffort-pode041bbba_486b_4bf8_b212_ca4fbb2d4a57.slice. 
Jan 24 00:45:08.868398 kubelet[2869]: I0124 00:45:08.867439 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e041bbba-486b-4bf8-b212-ca4fbb2d4a57-whisker-ca-bundle\") pod \"whisker-5f8f47959d-9fk7m\" (UID: \"e041bbba-486b-4bf8-b212-ca4fbb2d4a57\") " pod="calico-system/whisker-5f8f47959d-9fk7m" Jan 24 00:45:08.868398 kubelet[2869]: I0124 00:45:08.867491 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85sjq\" (UniqueName: \"kubernetes.io/projected/e041bbba-486b-4bf8-b212-ca4fbb2d4a57-kube-api-access-85sjq\") pod \"whisker-5f8f47959d-9fk7m\" (UID: \"e041bbba-486b-4bf8-b212-ca4fbb2d4a57\") " pod="calico-system/whisker-5f8f47959d-9fk7m" Jan 24 00:45:08.868398 kubelet[2869]: I0124 00:45:08.867555 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e041bbba-486b-4bf8-b212-ca4fbb2d4a57-whisker-backend-key-pair\") pod \"whisker-5f8f47959d-9fk7m\" (UID: \"e041bbba-486b-4bf8-b212-ca4fbb2d4a57\") " pod="calico-system/whisker-5f8f47959d-9fk7m" Jan 24 00:45:09.116499 systemd-networkd[1505]: calib6c5b11d811: Link UP Jan 24 00:45:09.116866 systemd-networkd[1505]: calib6c5b11d811: Gained carrier Jan 24 00:45:09.150900 containerd[1607]: time="2026-01-24T00:45:09.149669706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f8f47959d-9fk7m,Uid:e041bbba-486b-4bf8-b212-ca4fbb2d4a57,Namespace:calico-system,Attempt:0,}" Jan 24 00:45:09.200348 containerd[1607]: 2026-01-24 00:45:08.021 [INFO][4436] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:45:09.200348 containerd[1607]: 2026-01-24 00:45:08.145 [INFO][4436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5594cdc7fb--c8l7f-eth0 calico-apiserver-5594cdc7fb- calico-apiserver 050a17cf-0e04-46c0-ad64-4ce3987ef3d5 958 0 2026-01-24 00:44:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5594cdc7fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5594cdc7fb-c8l7f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib6c5b11d811 [] [] }} ContainerID="d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-c8l7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--c8l7f-" Jan 24 00:45:09.200348 containerd[1607]: 2026-01-24 00:45:08.147 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-c8l7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--c8l7f-eth0" Jan 24 00:45:09.200348 containerd[1607]: 2026-01-24 00:45:08.652 [INFO][4470] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" HandleID="k8s-pod-network.d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" Workload="localhost-k8s-calico--apiserver--5594cdc7fb--c8l7f-eth0" Jan 24 00:45:09.200768 
containerd[1607]: 2026-01-24 00:45:08.653 [INFO][4470] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" HandleID="k8s-pod-network.d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" Workload="localhost-k8s-calico--apiserver--5594cdc7fb--c8l7f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f77f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5594cdc7fb-c8l7f", "timestamp":"2026-01-24 00:45:08.652346791 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:09.200768 containerd[1607]: 2026-01-24 00:45:08.653 [INFO][4470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:09.200768 containerd[1607]: 2026-01-24 00:45:08.653 [INFO][4470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:09.200768 containerd[1607]: 2026-01-24 00:45:08.660 [INFO][4470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:45:09.200768 containerd[1607]: 2026-01-24 00:45:08.832 [INFO][4470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" host="localhost" Jan 24 00:45:09.200768 containerd[1607]: 2026-01-24 00:45:08.889 [INFO][4470] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:45:09.200768 containerd[1607]: 2026-01-24 00:45:08.922 [INFO][4470] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:45:09.200768 containerd[1607]: 2026-01-24 00:45:08.943 [INFO][4470] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:09.200768 containerd[1607]: 2026-01-24 00:45:08.967 [INFO][4470] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:09.200768 containerd[1607]: 2026-01-24 00:45:08.968 [INFO][4470] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" host="localhost" Jan 24 00:45:09.207377 containerd[1607]: 2026-01-24 00:45:08.994 [INFO][4470] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78 Jan 24 00:45:09.207377 containerd[1607]: 2026-01-24 00:45:09.018 [INFO][4470] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" host="localhost" Jan 24 00:45:09.207377 containerd[1607]: 2026-01-24 00:45:09.040 [INFO][4470] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" host="localhost" Jan 24 00:45:09.207377 containerd[1607]: 2026-01-24 00:45:09.042 [INFO][4470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" host="localhost" Jan 24 00:45:09.207377 containerd[1607]: 2026-01-24 00:45:09.046 [INFO][4470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
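
Note: the ipam lines above record Calico's block-affinity flow — acquire the host-wide IPAM lock, confirm this host's affinity for 192.168.88.128/26, load the block, claim one free address (192.168.88.129 here), then release the lock. The Go sketch below only illustrates the "claim the next free address from the block" step implied by those entries; it is an assumed simplification, not the actual Calico ipam package.

package main

import (
	"fmt"
	"net/netip"
)

// nextFreeAddress walks the addresses of a host-affine block in order and
// returns the first one not already allocated, mirroring the effect of the
// "Attempting to assign 1 addresses from block" step logged above.
func nextFreeAddress(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.128"): true, // block base address, not handed out
		netip.MustParseAddr("192.168.88.129"): true, // just claimed for calico-apiserver-5594cdc7fb-c8l7f
	}
	if a, ok := nextFreeAddress(block, allocated); ok {
		// Prints 192.168.88.130, matching the address the later log
		// entries assign to coredns-66bc5c9577-wp69k.
		fmt.Println("next assignment from block:", a)
	}
}
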
Jan 24 00:45:09.207377 containerd[1607]: 2026-01-24 00:45:09.046 [INFO][4470] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" HandleID="k8s-pod-network.d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" Workload="localhost-k8s-calico--apiserver--5594cdc7fb--c8l7f-eth0" Jan 24 00:45:09.207567 containerd[1607]: 2026-01-24 00:45:09.067 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-c8l7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--c8l7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5594cdc7fb--c8l7f-eth0", GenerateName:"calico-apiserver-5594cdc7fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"050a17cf-0e04-46c0-ad64-4ce3987ef3d5", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5594cdc7fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5594cdc7fb-c8l7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6c5b11d811", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:09.207735 containerd[1607]: 2026-01-24 00:45:09.067 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-c8l7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--c8l7f-eth0" Jan 24 00:45:09.207735 containerd[1607]: 2026-01-24 00:45:09.068 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6c5b11d811 ContainerID="d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-c8l7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--c8l7f-eth0" Jan 24 00:45:09.207735 containerd[1607]: 2026-01-24 00:45:09.121 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-c8l7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--c8l7f-eth0" Jan 24 00:45:09.207827 containerd[1607]: 2026-01-24 00:45:09.124 [INFO][4436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-c8l7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--c8l7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5594cdc7fb--c8l7f-eth0", GenerateName:"calico-apiserver-5594cdc7fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"050a17cf-0e04-46c0-ad64-4ce3987ef3d5", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5594cdc7fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78", Pod:"calico-apiserver-5594cdc7fb-c8l7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6c5b11d811", MAC:"76:39:42:9a:cf:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:09.207989 containerd[1607]: 2026-01-24 00:45:09.182 [INFO][4436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-c8l7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--c8l7f-eth0" Jan 24 00:45:09.374006 systemd-networkd[1505]: cali73c05912e5f: Link UP Jan 24 00:45:09.374765 systemd-networkd[1505]: cali73c05912e5f: Gained carrier Jan 24 00:45:09.457328 containerd[1607]: 2026-01-24 00:45:07.967 [INFO][4455] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:45:09.457328 containerd[1607]: 2026-01-24 00:45:08.156 [INFO][4455] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--wp69k-eth0 coredns-66bc5c9577- kube-system 9bde79db-04de-4de4-a505-446b3c718db4 962 0 2026-01-24 00:43:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-wp69k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali73c05912e5f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" Namespace="kube-system" Pod="coredns-66bc5c9577-wp69k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wp69k-" Jan 24 00:45:09.457328 containerd[1607]: 2026-01-24 00:45:08.157 [INFO][4455] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" Namespace="kube-system" Pod="coredns-66bc5c9577-wp69k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wp69k-eth0" Jan 24 00:45:09.457328 containerd[1607]: 2026-01-24 00:45:08.648 [INFO][4471] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" HandleID="k8s-pod-network.2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" Workload="localhost-k8s-coredns--66bc5c9577--wp69k-eth0" Jan 24 00:45:09.457868 containerd[1607]: 2026-01-24 00:45:08.654 [INFO][4471] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" HandleID="k8s-pod-network.2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" Workload="localhost-k8s-coredns--66bc5c9577--wp69k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a46c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-wp69k", "timestamp":"2026-01-24 00:45:08.648668506 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:09.457868 containerd[1607]: 2026-01-24 00:45:08.654 [INFO][4471] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:09.457868 containerd[1607]: 2026-01-24 00:45:09.048 [INFO][4471] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:09.457868 containerd[1607]: 2026-01-24 00:45:09.049 [INFO][4471] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:45:09.457868 containerd[1607]: 2026-01-24 00:45:09.076 [INFO][4471] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" host="localhost" Jan 24 00:45:09.457868 containerd[1607]: 2026-01-24 00:45:09.120 [INFO][4471] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:45:09.457868 containerd[1607]: 2026-01-24 00:45:09.166 [INFO][4471] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:45:09.457868 containerd[1607]: 2026-01-24 00:45:09.211 [INFO][4471] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:09.457868 containerd[1607]: 2026-01-24 00:45:09.216 [INFO][4471] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:09.457868 containerd[1607]: 2026-01-24 00:45:09.216 [INFO][4471] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" host="localhost" Jan 24 00:45:09.459754 containerd[1607]: 2026-01-24 00:45:09.239 [INFO][4471] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993 Jan 24 00:45:09.459754 containerd[1607]: 2026-01-24 00:45:09.278 [INFO][4471] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" host="localhost" Jan 24 00:45:09.459754 containerd[1607]: 2026-01-24 00:45:09.347 [INFO][4471] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" host="localhost" Jan 24 00:45:09.459754 containerd[1607]: 2026-01-24 00:45:09.347 [INFO][4471] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" host="localhost" Jan 24 00:45:09.459754 containerd[1607]: 2026-01-24 00:45:09.348 [INFO][4471] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:09.459754 containerd[1607]: 2026-01-24 00:45:09.348 [INFO][4471] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" HandleID="k8s-pod-network.2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" Workload="localhost-k8s-coredns--66bc5c9577--wp69k-eth0" Jan 24 00:45:09.459939 containerd[1607]: 2026-01-24 00:45:09.358 [INFO][4455] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" Namespace="kube-system" Pod="coredns-66bc5c9577-wp69k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wp69k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wp69k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9bde79db-04de-4de4-a505-446b3c718db4", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-wp69k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73c05912e5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:09.459939 containerd[1607]: 2026-01-24 00:45:09.358 [INFO][4455] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] 
ContainerID="2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" Namespace="kube-system" Pod="coredns-66bc5c9577-wp69k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wp69k-eth0" Jan 24 00:45:09.459939 containerd[1607]: 2026-01-24 00:45:09.358 [INFO][4455] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73c05912e5f ContainerID="2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" Namespace="kube-system" Pod="coredns-66bc5c9577-wp69k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wp69k-eth0" Jan 24 00:45:09.459939 containerd[1607]: 2026-01-24 00:45:09.381 [INFO][4455] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" Namespace="kube-system" Pod="coredns-66bc5c9577-wp69k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wp69k-eth0" Jan 24 00:45:09.459939 containerd[1607]: 2026-01-24 00:45:09.382 [INFO][4455] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" Namespace="kube-system" Pod="coredns-66bc5c9577-wp69k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wp69k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wp69k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9bde79db-04de-4de4-a505-446b3c718db4", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993", Pod:"coredns-66bc5c9577-wp69k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73c05912e5f", MAC:"22:fb:63:d9:29:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:09.459939 containerd[1607]: 2026-01-24 00:45:09.429 [INFO][4455] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" Namespace="kube-system" Pod="coredns-66bc5c9577-wp69k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wp69k-eth0" Jan 24 00:45:09.504355 containerd[1607]: time="2026-01-24T00:45:09.503970155Z" level=info msg="connecting to shim d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78" address="unix:///run/containerd/s/a5a90a32d30647d8e09d6a01b12a0a89613de09a6889cf9b6cde893fc50c66b9" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:45:09.564700 containerd[1607]: time="2026-01-24T00:45:09.564498924Z" level=info msg="connecting to shim 2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993" address="unix:///run/containerd/s/3d169f6398b2544e9b047ff7f816a7792cf477201a21013b36bf262e4636c17b" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:45:09.692368 systemd[1]: Started cri-containerd-d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78.scope - libcontainer container d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78. Jan 24 00:45:09.705838 systemd-networkd[1505]: calie2ef8885a24: Link UP Jan 24 00:45:09.713611 systemd[1]: Started cri-containerd-2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993.scope - libcontainer container 2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993. Jan 24 00:45:09.717704 systemd-networkd[1505]: calie2ef8885a24: Gained carrier Jan 24 00:45:09.754475 containerd[1607]: time="2026-01-24T00:45:09.754418865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5594cdc7fb-7kk8c,Uid:5a82bd01-5299-411a-9329-279ee1a3e6ef,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:45:09.802000 audit: BPF prog-id=175 op=LOAD Jan 24 00:45:09.809000 audit: BPF prog-id=176 op=LOAD Jan 24 00:45:09.809000 audit[4629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4615 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:09.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262666563633266663161363934396463353138646434613030343731 Jan 24 00:45:09.819000 audit: BPF prog-id=176 op=UNLOAD Jan 24 00:45:09.819000 audit[4629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4615 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:09.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262666563633266663161363934396463353138646434613030343731 Jan 24 00:45:09.819000 audit: BPF prog-id=177 op=LOAD Jan 24 00:45:09.819000 audit[4629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4615 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:09.819000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262666563633266663161363934396463353138646434613030343731 Jan 24 00:45:09.819000 audit: BPF prog-id=178 op=LOAD Jan 24 00:45:09.819000 audit[4629]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4615 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:09.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262666563633266663161363934396463353138646434613030343731 Jan 24 00:45:09.819000 audit: BPF prog-id=178 op=UNLOAD Jan 24 00:45:09.819000 audit[4629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4615 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:09.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262666563633266663161363934396463353138646434613030343731 Jan 24 00:45:09.819000 audit: BPF prog-id=177 op=UNLOAD Jan 24 00:45:09.819000 audit[4629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4615 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:09.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262666563633266663161363934396463353138646434613030343731 Jan 24 00:45:09.819000 audit: BPF prog-id=179 op=LOAD Jan 24 00:45:09.819000 audit[4629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4615 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:09.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262666563633266663161363934396463353138646434613030343731 Jan 24 00:45:09.824991 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:08.897 [INFO][4512] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:08.956 [INFO][4512] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--7zqd4-eth0 coredns-66bc5c9577- kube-system 
7a4cfce0-c870-4a39-b5a8-35bde17d6784 959 0 2026-01-24 00:43:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-7zqd4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie2ef8885a24 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" Namespace="kube-system" Pod="coredns-66bc5c9577-7zqd4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7zqd4-" Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:08.956 [INFO][4512] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" Namespace="kube-system" Pod="coredns-66bc5c9577-7zqd4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7zqd4-eth0" Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.150 [INFO][4537] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" HandleID="k8s-pod-network.c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" Workload="localhost-k8s-coredns--66bc5c9577--7zqd4-eth0" Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.150 [INFO][4537] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" HandleID="k8s-pod-network.c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" Workload="localhost-k8s-coredns--66bc5c9577--7zqd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000512690), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-7zqd4", "timestamp":"2026-01-24 00:45:09.150413021 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.150 [INFO][4537] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.348 [INFO][4537] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.351 [INFO][4537] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.453 [INFO][4537] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" host="localhost" Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.503 [INFO][4537] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.552 [INFO][4537] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.559 [INFO][4537] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.568 [INFO][4537] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.569 [INFO][4537] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" host="localhost" Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.597 [INFO][4537] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.639 [INFO][4537] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" host="localhost" Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.673 [INFO][4537] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" host="localhost" Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.674 [INFO][4537] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" host="localhost" Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.676 [INFO][4537] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:45:09.833727 containerd[1607]: 2026-01-24 00:45:09.676 [INFO][4537] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" HandleID="k8s-pod-network.c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" Workload="localhost-k8s-coredns--66bc5c9577--7zqd4-eth0" Jan 24 00:45:09.839407 containerd[1607]: 2026-01-24 00:45:09.686 [INFO][4512] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" Namespace="kube-system" Pod="coredns-66bc5c9577-7zqd4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7zqd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--7zqd4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7a4cfce0-c870-4a39-b5a8-35bde17d6784", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-7zqd4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2ef8885a24", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:09.839407 containerd[1607]: 2026-01-24 00:45:09.686 [INFO][4512] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" Namespace="kube-system" Pod="coredns-66bc5c9577-7zqd4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7zqd4-eth0" Jan 24 00:45:09.839407 containerd[1607]: 2026-01-24 00:45:09.687 [INFO][4512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2ef8885a24 ContainerID="c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" Namespace="kube-system" Pod="coredns-66bc5c9577-7zqd4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7zqd4-eth0" Jan 24 00:45:09.839407 containerd[1607]: 2026-01-24 00:45:09.719 
[INFO][4512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" Namespace="kube-system" Pod="coredns-66bc5c9577-7zqd4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7zqd4-eth0" Jan 24 00:45:09.839407 containerd[1607]: 2026-01-24 00:45:09.721 [INFO][4512] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" Namespace="kube-system" Pod="coredns-66bc5c9577-7zqd4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7zqd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--7zqd4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7a4cfce0-c870-4a39-b5a8-35bde17d6784", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e", Pod:"coredns-66bc5c9577-7zqd4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2ef8885a24", MAC:"52:d5:7f:6d:f5:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:09.839407 containerd[1607]: 2026-01-24 00:45:09.797 [INFO][4512] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" Namespace="kube-system" Pod="coredns-66bc5c9577-7zqd4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7zqd4-eth0" Jan 24 00:45:09.869032 kubelet[2869]: I0124 00:45:09.868598 2869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4771eb7f-3eb7-4b12-836b-87b2639f542e" path="/var/lib/kubelet/pods/4771eb7f-3eb7-4b12-836b-87b2639f542e/volumes" Jan 24 00:45:09.927999 containerd[1607]: time="2026-01-24T00:45:09.927893062Z" level=info msg="connecting to shim c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e" 
address="unix:///run/containerd/s/93b68906b63e70e034fe1226687207ac6d0e7b7f4e1e91b6c0c02f50baa63639" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:45:09.956000 audit: BPF prog-id=180 op=LOAD Jan 24 00:45:09.958000 audit: BPF prog-id=181 op=LOAD Jan 24 00:45:09.958000 audit[4618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4595 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:09.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433346465613961616365343935616535643466653031346531356134 Jan 24 00:45:09.963000 audit: BPF prog-id=181 op=UNLOAD Jan 24 00:45:09.963000 audit[4618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4595 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:09.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433346465613961616365343935616535643466653031346531356134 Jan 24 00:45:09.967000 audit: BPF prog-id=182 op=LOAD Jan 24 00:45:09.967000 audit[4618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4595 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:09.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433346465613961616365343935616535643466653031346531356134 Jan 24 00:45:09.970000 audit: BPF prog-id=183 op=LOAD Jan 24 00:45:09.970000 audit[4618]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4595 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:09.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433346465613961616365343935616535643466653031346531356134 Jan 24 00:45:09.970000 audit: BPF prog-id=183 op=UNLOAD Jan 24 00:45:09.970000 audit[4618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4595 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:09.970000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433346465613961616365343935616535643466653031346531356134 Jan 24 00:45:09.970000 audit: BPF prog-id=182 op=UNLOAD Jan 24 00:45:09.970000 audit[4618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4595 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:09.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433346465613961616365343935616535643466653031346531356134 Jan 24 00:45:09.970000 audit: BPF prog-id=184 op=LOAD Jan 24 00:45:09.970000 audit[4618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4595 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:09.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433346465613961616365343935616535643466653031346531356134 Jan 24 00:45:09.974934 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:45:10.062704 containerd[1607]: time="2026-01-24T00:45:10.062519326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wp69k,Uid:9bde79db-04de-4de4-a505-446b3c718db4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993\"" Jan 24 00:45:10.064857 kubelet[2869]: E0124 00:45:10.064827 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:10.065900 systemd[1]: Started cri-containerd-c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e.scope - libcontainer container c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e. 
Jan 24 00:45:10.074382 containerd[1607]: time="2026-01-24T00:45:10.073954710Z" level=info msg="CreateContainer within sandbox \"2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:45:10.114000 audit: BPF prog-id=185 op=LOAD Jan 24 00:45:10.118000 audit: BPF prog-id=186 op=LOAD Jan 24 00:45:10.118000 audit[4709]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4695 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:10.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383761643061313530396430663231393730386564326237393438 Jan 24 00:45:10.118000 audit: BPF prog-id=186 op=UNLOAD Jan 24 00:45:10.118000 audit[4709]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:10.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383761643061313530396430663231393730386564326237393438 Jan 24 00:45:10.119000 audit: BPF prog-id=187 op=LOAD Jan 24 00:45:10.119000 audit[4709]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4695 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:10.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383761643061313530396430663231393730386564326237393438 Jan 24 00:45:10.120000 audit: BPF prog-id=188 op=LOAD Jan 24 00:45:10.120000 audit[4709]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4695 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:10.120000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383761643061313530396430663231393730386564326237393438 Jan 24 00:45:10.126000 audit: BPF prog-id=188 op=UNLOAD Jan 24 00:45:10.126000 audit[4709]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:10.126000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383761643061313530396430663231393730386564326237393438 Jan 24 00:45:10.126000 audit: BPF prog-id=187 op=UNLOAD Jan 24 00:45:10.126000 audit[4709]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:10.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383761643061313530396430663231393730386564326237393438 Jan 24 00:45:10.126000 audit: BPF prog-id=189 op=LOAD Jan 24 00:45:10.126000 audit[4709]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4695 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:10.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383761643061313530396430663231393730386564326237393438 Jan 24 00:45:10.139378 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:45:10.174613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4293377212.mount: Deactivated successfully. 
Jan 24 00:45:10.185564 containerd[1607]: time="2026-01-24T00:45:10.184822237Z" level=info msg="Container 9a0fc3268f57da328a594ece0449b2d33c85cabd013a63c9ce7745ad5112a6c6: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:45:10.320722 systemd-networkd[1505]: cali12496998970: Link UP Jan 24 00:45:10.337413 systemd-networkd[1505]: cali12496998970: Gained carrier Jan 24 00:45:10.349547 containerd[1607]: time="2026-01-24T00:45:10.348924432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5594cdc7fb-c8l7f,Uid:050a17cf-0e04-46c0-ad64-4ce3987ef3d5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d34dea9aace495ae5d4fe014e15a49e3a4942eb0965f421a321b8659cea52d78\"" Jan 24 00:45:10.370390 containerd[1607]: time="2026-01-24T00:45:10.370056626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:45:10.382626 containerd[1607]: time="2026-01-24T00:45:10.382475629Z" level=info msg="CreateContainer within sandbox \"2bfecc2ff1a6949dc518dd4a00471153de419120bb0e20e556da1b0a195bf993\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a0fc3268f57da328a594ece0449b2d33c85cabd013a63c9ce7745ad5112a6c6\"" Jan 24 00:45:10.390983 containerd[1607]: time="2026-01-24T00:45:10.390873309Z" level=info msg="StartContainer for \"9a0fc3268f57da328a594ece0449b2d33c85cabd013a63c9ce7745ad5112a6c6\"" Jan 24 00:45:10.416430 containerd[1607]: time="2026-01-24T00:45:10.416295984Z" level=info msg="connecting to shim 9a0fc3268f57da328a594ece0449b2d33c85cabd013a63c9ce7745ad5112a6c6" address="unix:///run/containerd/s/3d169f6398b2544e9b047ff7f816a7792cf477201a21013b36bf262e4636c17b" protocol=ttrpc version=3 Jan 24 00:45:10.464610 containerd[1607]: time="2026-01-24T00:45:10.464474554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7zqd4,Uid:7a4cfce0-c870-4a39-b5a8-35bde17d6784,Namespace:kube-system,Attempt:0,} returns sandbox id \"c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e\"" Jan 24 00:45:10.467901 kubelet[2869]: E0124 00:45:10.466420 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:10.505047 containerd[1607]: time="2026-01-24T00:45:10.504941898Z" level=info msg="CreateContainer within sandbox \"c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.263 [INFO][4551] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.323 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5f8f47959d--9fk7m-eth0 whisker-5f8f47959d- calico-system e041bbba-486b-4bf8-b212-ca4fbb2d4a57 1091 0 2026-01-24 00:45:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f8f47959d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5f8f47959d-9fk7m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali12496998970 [] [] }} ContainerID="c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" Namespace="calico-system" Pod="whisker-5f8f47959d-9fk7m" WorkloadEndpoint="localhost-k8s-whisker--5f8f47959d--9fk7m-" Jan 24 00:45:10.518413 
containerd[1607]: 2026-01-24 00:45:09.323 [INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" Namespace="calico-system" Pod="whisker-5f8f47959d-9fk7m" WorkloadEndpoint="localhost-k8s-whisker--5f8f47959d--9fk7m-eth0" Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.513 [INFO][4575] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" HandleID="k8s-pod-network.c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" Workload="localhost-k8s-whisker--5f8f47959d--9fk7m-eth0" Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.518 [INFO][4575] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" HandleID="k8s-pod-network.c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" Workload="localhost-k8s-whisker--5f8f47959d--9fk7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004edc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5f8f47959d-9fk7m", "timestamp":"2026-01-24 00:45:09.513841089 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.519 [INFO][4575] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.674 [INFO][4575] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.675 [INFO][4575] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.748 [INFO][4575] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" host="localhost" Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.831 [INFO][4575] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.875 [INFO][4575] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.895 [INFO][4575] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.913 [INFO][4575] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.913 [INFO][4575] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" host="localhost" Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.942 [INFO][4575] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359 Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:09.979 [INFO][4575] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" host="localhost" 
Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:10.068 [INFO][4575] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" host="localhost" Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:10.091 [INFO][4575] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" host="localhost" Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:10.098 [INFO][4575] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:10.518413 containerd[1607]: 2026-01-24 00:45:10.101 [INFO][4575] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" HandleID="k8s-pod-network.c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" Workload="localhost-k8s-whisker--5f8f47959d--9fk7m-eth0" Jan 24 00:45:10.520708 containerd[1607]: 2026-01-24 00:45:10.214 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" Namespace="calico-system" Pod="whisker-5f8f47959d-9fk7m" WorkloadEndpoint="localhost-k8s-whisker--5f8f47959d--9fk7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f8f47959d--9fk7m-eth0", GenerateName:"whisker-5f8f47959d-", Namespace:"calico-system", SelfLink:"", UID:"e041bbba-486b-4bf8-b212-ca4fbb2d4a57", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f8f47959d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5f8f47959d-9fk7m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali12496998970", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:10.520708 containerd[1607]: 2026-01-24 00:45:10.214 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" Namespace="calico-system" Pod="whisker-5f8f47959d-9fk7m" WorkloadEndpoint="localhost-k8s-whisker--5f8f47959d--9fk7m-eth0" Jan 24 00:45:10.520708 containerd[1607]: 2026-01-24 00:45:10.214 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12496998970 ContainerID="c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" Namespace="calico-system" Pod="whisker-5f8f47959d-9fk7m" WorkloadEndpoint="localhost-k8s-whisker--5f8f47959d--9fk7m-eth0" Jan 24 00:45:10.520708 containerd[1607]: 2026-01-24 00:45:10.348 
[INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" Namespace="calico-system" Pod="whisker-5f8f47959d-9fk7m" WorkloadEndpoint="localhost-k8s-whisker--5f8f47959d--9fk7m-eth0" Jan 24 00:45:10.520708 containerd[1607]: 2026-01-24 00:45:10.356 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" Namespace="calico-system" Pod="whisker-5f8f47959d-9fk7m" WorkloadEndpoint="localhost-k8s-whisker--5f8f47959d--9fk7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f8f47959d--9fk7m-eth0", GenerateName:"whisker-5f8f47959d-", Namespace:"calico-system", SelfLink:"", UID:"e041bbba-486b-4bf8-b212-ca4fbb2d4a57", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f8f47959d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359", Pod:"whisker-5f8f47959d-9fk7m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali12496998970", MAC:"da:15:ce:a9:60:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:10.520708 containerd[1607]: 2026-01-24 00:45:10.452 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" Namespace="calico-system" Pod="whisker-5f8f47959d-9fk7m" WorkloadEndpoint="localhost-k8s-whisker--5f8f47959d--9fk7m-eth0" Jan 24 00:45:10.623939 containerd[1607]: time="2026-01-24T00:45:10.623807037Z" level=info msg="Container 8166f337a89228b4f72a61324b00c368f2608f06ebab60e52bfb0c1dd90615b7: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:45:10.669388 containerd[1607]: time="2026-01-24T00:45:10.667386273Z" level=info msg="connecting to shim c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359" address="unix:///run/containerd/s/6b8208bb957a285cb281175b529f02978dd9dc351866420d7b0888420d999610" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:45:10.688946 containerd[1607]: time="2026-01-24T00:45:10.683697102Z" level=info msg="CreateContainer within sandbox \"c687ad0a1509d0f219708ed2b79489365f06fa5f4ae458bd24df5ae6718a042e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8166f337a89228b4f72a61324b00c368f2608f06ebab60e52bfb0c1dd90615b7\"" Jan 24 00:45:10.691806 containerd[1607]: time="2026-01-24T00:45:10.691767783Z" level=info msg="StartContainer for \"8166f337a89228b4f72a61324b00c368f2608f06ebab60e52bfb0c1dd90615b7\"" Jan 24 00:45:10.718915 systemd[1]: Started 
cri-containerd-9a0fc3268f57da328a594ece0449b2d33c85cabd013a63c9ce7745ad5112a6c6.scope - libcontainer container 9a0fc3268f57da328a594ece0449b2d33c85cabd013a63c9ce7745ad5112a6c6. Jan 24 00:45:10.726867 containerd[1607]: time="2026-01-24T00:45:10.726766435Z" level=info msg="connecting to shim 8166f337a89228b4f72a61324b00c368f2608f06ebab60e52bfb0c1dd90615b7" address="unix:///run/containerd/s/93b68906b63e70e034fe1226687207ac6d0e7b7f4e1e91b6c0c02f50baa63639" protocol=ttrpc version=3 Jan 24 00:45:10.743838 kubelet[2869]: E0124 00:45:10.742922 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:10.743966 containerd[1607]: time="2026-01-24T00:45:10.743671242Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:10.805637 containerd[1607]: time="2026-01-24T00:45:10.805542573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:45:10.808376 containerd[1607]: time="2026-01-24T00:45:10.805667786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:10.808484 kubelet[2869]: E0124 00:45:10.805880 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:10.811034 kubelet[2869]: E0124 00:45:10.809936 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:10.811034 kubelet[2869]: E0124 00:45:10.810063 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5594cdc7fb-c8l7f_calico-apiserver(050a17cf-0e04-46c0-ad64-4ce3987ef3d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:10.811034 kubelet[2869]: E0124 00:45:10.810115 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" podUID="050a17cf-0e04-46c0-ad64-4ce3987ef3d5" Jan 24 00:45:10.875489 systemd-networkd[1505]: calib6c5b11d811: Gained IPv6LL Jan 24 00:45:10.920397 systemd[1]: Started cri-containerd-8166f337a89228b4f72a61324b00c368f2608f06ebab60e52bfb0c1dd90615b7.scope - libcontainer container 8166f337a89228b4f72a61324b00c368f2608f06ebab60e52bfb0c1dd90615b7. 
Jan 24 00:45:10.943125 systemd-networkd[1505]: calie2ef8885a24: Gained IPv6LL Jan 24 00:45:10.946000 audit: BPF prog-id=190 op=LOAD Jan 24 00:45:10.950000 audit: BPF prog-id=191 op=LOAD Jan 24 00:45:10.950000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4615 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:10.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961306663333236386635376461333238613539346563653034343962 Jan 24 00:45:10.950000 audit: BPF prog-id=191 op=UNLOAD Jan 24 00:45:10.950000 audit[4820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4615 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:10.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961306663333236386635376461333238613539346563653034343962 Jan 24 00:45:10.950000 audit: BPF prog-id=192 op=LOAD Jan 24 00:45:10.950000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4615 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:10.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961306663333236386635376461333238613539346563653034343962 Jan 24 00:45:10.951000 audit: BPF prog-id=193 op=LOAD Jan 24 00:45:10.951000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4615 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:10.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961306663333236386635376461333238613539346563653034343962 Jan 24 00:45:10.951000 audit: BPF prog-id=193 op=UNLOAD Jan 24 00:45:10.951000 audit[4820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4615 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:10.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961306663333236386635376461333238613539346563653034343962 Jan 24 00:45:10.951000 
audit: BPF prog-id=192 op=UNLOAD Jan 24 00:45:10.951000 audit[4820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4615 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:10.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961306663333236386635376461333238613539346563653034343962 Jan 24 00:45:10.951000 audit: BPF prog-id=194 op=LOAD Jan 24 00:45:10.951000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4615 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:10.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961306663333236386635376461333238613539346563653034343962 Jan 24 00:45:10.974096 systemd[1]: Started cri-containerd-c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359.scope - libcontainer container c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359. Jan 24 00:45:11.003817 systemd-networkd[1505]: cali73c05912e5f: Gained IPv6LL Jan 24 00:45:11.106454 kernel: kauditd_printk_skb: 93 callbacks suppressed Jan 24 00:45:11.106589 kernel: audit: type=1334 audit(1769215511.095:616): prog-id=195 op=LOAD Jan 24 00:45:11.095000 audit: BPF prog-id=195 op=LOAD Jan 24 00:45:11.123040 systemd-networkd[1505]: cali8bf4927ed6c: Link UP Jan 24 00:45:11.136568 systemd-networkd[1505]: cali8bf4927ed6c: Gained carrier Jan 24 00:45:11.136000 audit: BPF prog-id=196 op=LOAD Jan 24 00:45:11.136000 audit[4889]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4695 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.141421 kernel: audit: type=1334 audit(1769215511.136:617): prog-id=196 op=LOAD Jan 24 00:45:11.141481 kernel: audit: type=1300 audit(1769215511.136:617): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4695 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.141531 kernel: audit: type=1327 audit(1769215511.136:617): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666333337613839323238623466373261363133323462303063 Jan 24 00:45:11.141562 kernel: audit: type=1334 audit(1769215511.136:618): prog-id=196 op=UNLOAD Jan 24 00:45:11.141607 kernel: audit: type=1300 audit(1769215511.136:618): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.141641 kernel: audit: type=1327 audit(1769215511.136:618): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666333337613839323238623466373261363133323462303063 Jan 24 00:45:11.141671 kernel: audit: type=1334 audit(1769215511.136:619): prog-id=197 op=LOAD Jan 24 00:45:11.141704 kernel: audit: type=1300 audit(1769215511.136:619): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4695 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.141741 kernel: audit: type=1327 audit(1769215511.136:619): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666333337613839323238623466373261363133323462303063 Jan 24 00:45:11.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666333337613839323238623466373261363133323462303063 Jan 24 00:45:11.136000 audit: BPF prog-id=196 op=UNLOAD Jan 24 00:45:11.136000 audit[4889]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666333337613839323238623466373261363133323462303063 Jan 24 00:45:11.136000 audit: BPF prog-id=197 op=LOAD Jan 24 00:45:11.136000 audit[4889]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4695 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666333337613839323238623466373261363133323462303063 Jan 24 00:45:11.136000 audit: BPF prog-id=198 op=LOAD Jan 24 00:45:11.136000 audit[4889]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4695 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666333337613839323238623466373261363133323462303063 Jan 24 
00:45:11.137000 audit: BPF prog-id=198 op=UNLOAD Jan 24 00:45:11.137000 audit[4889]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666333337613839323238623466373261363133323462303063 Jan 24 00:45:11.137000 audit: BPF prog-id=197 op=UNLOAD Jan 24 00:45:11.137000 audit[4889]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666333337613839323238623466373261363133323462303063 Jan 24 00:45:11.137000 audit: BPF prog-id=199 op=LOAD Jan 24 00:45:11.137000 audit[4889]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4695 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666333337613839323238623466373261363133323462303063 Jan 24 00:45:11.220000 audit: BPF prog-id=200 op=LOAD Jan 24 00:45:11.227000 audit: BPF prog-id=201 op=LOAD Jan 24 00:45:11.227000 audit[4888]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e0238 a2=98 a3=0 items=0 ppid=4870 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333393730653830353263643965343061613435346431396130646661 Jan 24 00:45:11.227000 audit: BPF prog-id=201 op=UNLOAD Jan 24 00:45:11.227000 audit[4888]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4870 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333393730653830353263643965343061613435346431396130646661 Jan 24 00:45:11.228000 audit: BPF prog-id=202 op=LOAD Jan 24 00:45:11.228000 audit[4888]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c0001e0488 a2=98 a3=0 items=0 ppid=4870 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333393730653830353263643965343061613435346431396130646661 Jan 24 00:45:11.235000 audit: BPF prog-id=203 op=LOAD Jan 24 00:45:11.235000 audit[4888]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001e0218 a2=98 a3=0 items=0 ppid=4870 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333393730653830353263643965343061613435346431396130646661 Jan 24 00:45:11.235000 audit: BPF prog-id=203 op=UNLOAD Jan 24 00:45:11.235000 audit[4888]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4870 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333393730653830353263643965343061613435346431396130646661 Jan 24 00:45:11.235000 audit: BPF prog-id=202 op=UNLOAD Jan 24 00:45:11.235000 audit[4888]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4870 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333393730653830353263643965343061613435346431396130646661 Jan 24 00:45:11.235000 audit: BPF prog-id=204 op=LOAD Jan 24 00:45:11.235000 audit[4888]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e06e8 a2=98 a3=0 items=0 ppid=4870 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333393730653830353263643965343061613435346431396130646661 Jan 24 00:45:11.262836 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:45:11.354896 kubelet[2869]: E0124 00:45:11.352624 2869 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" podUID="050a17cf-0e04-46c0-ad64-4ce3987ef3d5" Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:09.935 [INFO][4669] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.018 [INFO][4669] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5594cdc7fb--7kk8c-eth0 calico-apiserver-5594cdc7fb- calico-apiserver 5a82bd01-5299-411a-9329-279ee1a3e6ef 961 0 2026-01-24 00:44:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5594cdc7fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5594cdc7fb-7kk8c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8bf4927ed6c [] [] }} ContainerID="cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-7kk8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--7kk8c-" Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.019 [INFO][4669] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-7kk8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--7kk8c-eth0" Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.505 [INFO][4727] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" HandleID="k8s-pod-network.cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" Workload="localhost-k8s-calico--apiserver--5594cdc7fb--7kk8c-eth0" Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.522 [INFO][4727] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" HandleID="k8s-pod-network.cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" Workload="localhost-k8s-calico--apiserver--5594cdc7fb--7kk8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f310), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5594cdc7fb-7kk8c", "timestamp":"2026-01-24 00:45:10.505927156 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.538 [INFO][4727] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.538 [INFO][4727] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.538 [INFO][4727] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.615 [INFO][4727] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" host="localhost" Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.688 [INFO][4727] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.738 [INFO][4727] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.782 [INFO][4727] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.814 [INFO][4727] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.814 [INFO][4727] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" host="localhost" Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.852 [INFO][4727] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32 Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.882 [INFO][4727] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" host="localhost" Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.923 [INFO][4727] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" host="localhost" Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.923 [INFO][4727] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" host="localhost" Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.923 [INFO][4727] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:45:11.381788 containerd[1607]: 2026-01-24 00:45:10.924 [INFO][4727] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" HandleID="k8s-pod-network.cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" Workload="localhost-k8s-calico--apiserver--5594cdc7fb--7kk8c-eth0" Jan 24 00:45:11.392336 containerd[1607]: 2026-01-24 00:45:11.047 [INFO][4669] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-7kk8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--7kk8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5594cdc7fb--7kk8c-eth0", GenerateName:"calico-apiserver-5594cdc7fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a82bd01-5299-411a-9329-279ee1a3e6ef", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5594cdc7fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5594cdc7fb-7kk8c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8bf4927ed6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:11.392336 containerd[1607]: 2026-01-24 00:45:11.047 [INFO][4669] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-7kk8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--7kk8c-eth0" Jan 24 00:45:11.392336 containerd[1607]: 2026-01-24 00:45:11.047 [INFO][4669] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bf4927ed6c ContainerID="cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-7kk8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--7kk8c-eth0" Jan 24 00:45:11.392336 containerd[1607]: 2026-01-24 00:45:11.150 [INFO][4669] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-7kk8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--7kk8c-eth0" Jan 24 00:45:11.392336 containerd[1607]: 2026-01-24 00:45:11.190 [INFO][4669] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-7kk8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--7kk8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5594cdc7fb--7kk8c-eth0", GenerateName:"calico-apiserver-5594cdc7fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a82bd01-5299-411a-9329-279ee1a3e6ef", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5594cdc7fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32", Pod:"calico-apiserver-5594cdc7fb-7kk8c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8bf4927ed6c", MAC:"fa:ee:5f:12:fe:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:11.392336 containerd[1607]: 2026-01-24 00:45:11.291 [INFO][4669] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" Namespace="calico-apiserver" Pod="calico-apiserver-5594cdc7fb-7kk8c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5594cdc7fb--7kk8c-eth0" Jan 24 00:45:11.479957 containerd[1607]: time="2026-01-24T00:45:11.479859019Z" level=info msg="StartContainer for \"8166f337a89228b4f72a61324b00c368f2608f06ebab60e52bfb0c1dd90615b7\" returns successfully" Jan 24 00:45:11.518657 containerd[1607]: time="2026-01-24T00:45:11.517720359Z" level=info msg="StartContainer for \"9a0fc3268f57da328a594ece0449b2d33c85cabd013a63c9ce7745ad5112a6c6\" returns successfully" Jan 24 00:45:11.584000 audit[4972]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:45:11.584000 audit[4972]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdbd9df360 a2=0 a3=7ffdbd9df34c items=0 ppid=3027 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.584000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:45:11.591000 audit[4972]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:45:11.591000 audit[4972]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdbd9df360 a2=0 a3=0 items=0 ppid=3027 
pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:11.591000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:45:11.654432 containerd[1607]: time="2026-01-24T00:45:11.652787072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f8f47959d-9fk7m,Uid:e041bbba-486b-4bf8-b212-ca4fbb2d4a57,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3970e8052cd9e40aa454d19a0dfa17cf0cbbe7dd3480ea9de7f524bff775359\"" Jan 24 00:45:11.664318 containerd[1607]: time="2026-01-24T00:45:11.663131948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:45:11.687005 containerd[1607]: time="2026-01-24T00:45:11.686912162Z" level=info msg="connecting to shim cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32" address="unix:///run/containerd/s/728b1226b93dd7a3040d7c1c2073e9934dbbf4d08c3000e577de4cec1f3e4adb" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:45:11.756561 containerd[1607]: time="2026-01-24T00:45:11.756480872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-j2nlt,Uid:0329b08b-e4ed-4b35-88d7-60baae652219,Namespace:calico-system,Attempt:0,}" Jan 24 00:45:11.773841 containerd[1607]: time="2026-01-24T00:45:11.773512450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:11.794553 containerd[1607]: time="2026-01-24T00:45:11.792595511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:45:11.795021 containerd[1607]: time="2026-01-24T00:45:11.793822019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:11.801361 kubelet[2869]: E0124 00:45:11.801089 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:45:11.801480 kubelet[2869]: E0124 00:45:11.801380 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:45:11.801580 kubelet[2869]: E0124 00:45:11.801545 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5f8f47959d-9fk7m_calico-system(e041bbba-486b-4bf8-b212-ca4fbb2d4a57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:11.804699 containerd[1607]: time="2026-01-24T00:45:11.804412281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:45:11.965440 systemd-networkd[1505]: cali12496998970: Gained IPv6LL Jan 24 00:45:12.016081 containerd[1607]: time="2026-01-24T00:45:12.012909781Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:12.029712 systemd[1]: Started cri-containerd-cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32.scope - libcontainer container cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32. Jan 24 00:45:12.036353 containerd[1607]: time="2026-01-24T00:45:12.035845300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:45:12.036353 containerd[1607]: time="2026-01-24T00:45:12.035998226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:12.037287 kubelet[2869]: E0124 00:45:12.036889 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:45:12.037287 kubelet[2869]: E0124 00:45:12.036997 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:45:12.037287 kubelet[2869]: E0124 00:45:12.037079 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5f8f47959d-9fk7m_calico-system(e041bbba-486b-4bf8-b212-ca4fbb2d4a57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:12.038477 kubelet[2869]: E0124 00:45:12.037129 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f47959d-9fk7m" podUID="e041bbba-486b-4bf8-b212-ca4fbb2d4a57" Jan 24 00:45:12.071000 audit: BPF prog-id=205 op=LOAD Jan 24 00:45:12.072000 audit: BPF prog-id=206 op=LOAD Jan 24 00:45:12.072000 audit[5003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000250238 a2=98 a3=0 items=0 ppid=4988 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.072000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363353266306466356531306633313163643633313765316666316231 Jan 24 00:45:12.073000 audit: BPF prog-id=206 op=UNLOAD Jan 24 00:45:12.073000 audit[5003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4988 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363353266306466356531306633313163643633313765316666316231 Jan 24 00:45:12.073000 audit: BPF prog-id=207 op=LOAD Jan 24 00:45:12.073000 audit[5003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000250488 a2=98 a3=0 items=0 ppid=4988 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363353266306466356531306633313163643633313765316666316231 Jan 24 00:45:12.073000 audit: BPF prog-id=208 op=LOAD Jan 24 00:45:12.073000 audit[5003]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000250218 a2=98 a3=0 items=0 ppid=4988 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363353266306466356531306633313163643633313765316666316231 Jan 24 00:45:12.073000 audit: BPF prog-id=208 op=UNLOAD Jan 24 00:45:12.073000 audit[5003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4988 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363353266306466356531306633313163643633313765316666316231 Jan 24 00:45:12.073000 audit: BPF prog-id=207 op=UNLOAD Jan 24 00:45:12.073000 audit[5003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4988 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.073000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363353266306466356531306633313163643633313765316666316231 Jan 24 00:45:12.073000 audit: BPF prog-id=209 op=LOAD Jan 24 00:45:12.073000 audit[5003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002506e8 a2=98 a3=0 items=0 ppid=4988 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363353266306466356531306633313163643633313765316666316231 Jan 24 00:45:12.088030 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:45:12.353434 containerd[1607]: time="2026-01-24T00:45:12.353092416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5594cdc7fb-7kk8c,Uid:5a82bd01-5299-411a-9329-279ee1a3e6ef,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cc52f0df5e10f311cd6317e1ff1b138de1d660c89bb7646c08c75e4d75196b32\"" Jan 24 00:45:12.362461 containerd[1607]: time="2026-01-24T00:45:12.362299868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:45:12.452317 kubelet[2869]: E0124 00:45:12.451574 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:12.478679 kubelet[2869]: E0124 00:45:12.478642 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:12.486957 kubelet[2869]: E0124 00:45:12.482716 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" podUID="050a17cf-0e04-46c0-ad64-4ce3987ef3d5" Jan 24 00:45:12.486957 kubelet[2869]: E0124 00:45:12.486821 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f47959d-9fk7m" 
podUID="e041bbba-486b-4bf8-b212-ca4fbb2d4a57" Jan 24 00:45:12.488016 containerd[1607]: time="2026-01-24T00:45:12.483808832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:12.491030 containerd[1607]: time="2026-01-24T00:45:12.490919010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:45:12.492267 containerd[1607]: time="2026-01-24T00:45:12.491104787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:12.494264 kubelet[2869]: E0124 00:45:12.492960 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:12.494264 kubelet[2869]: E0124 00:45:12.493004 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:12.494264 kubelet[2869]: E0124 00:45:12.493068 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5594cdc7fb-7kk8c_calico-apiserver(5a82bd01-5299-411a-9329-279ee1a3e6ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:12.494264 kubelet[2869]: E0124 00:45:12.493109 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" podUID="5a82bd01-5299-411a-9329-279ee1a3e6ef" Jan 24 00:45:12.515289 systemd-networkd[1505]: cali0a1a97ad043: Link UP Jan 24 00:45:12.535792 systemd-networkd[1505]: cali0a1a97ad043: Gained carrier Jan 24 00:45:12.640061 kubelet[2869]: I0124 00:45:12.635018 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wp69k" podStartSLOduration=95.634997686 podStartE2EDuration="1m35.634997686s" podCreationTimestamp="2026-01-24 00:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:45:12.536454713 +0000 UTC m=+99.255755080" watchObservedRunningTime="2026-01-24 00:45:12.634997686 +0000 UTC m=+99.354297902" Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:11.922 [INFO][5006] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.045 [INFO][5006] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--j2nlt-eth0 goldmane-7c778bb748- 
calico-system 0329b08b-e4ed-4b35-88d7-60baae652219 965 0 2026-01-24 00:44:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-j2nlt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0a1a97ad043 [] [] }} ContainerID="f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" Namespace="calico-system" Pod="goldmane-7c778bb748-j2nlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j2nlt-" Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.048 [INFO][5006] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" Namespace="calico-system" Pod="goldmane-7c778bb748-j2nlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j2nlt-eth0" Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.220 [INFO][5042] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" HandleID="k8s-pod-network.f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" Workload="localhost-k8s-goldmane--7c778bb748--j2nlt-eth0" Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.220 [INFO][5042] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" HandleID="k8s-pod-network.f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" Workload="localhost-k8s-goldmane--7c778bb748--j2nlt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-j2nlt", "timestamp":"2026-01-24 00:45:12.220500853 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.221 [INFO][5042] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.221 [INFO][5042] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.221 [INFO][5042] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.260 [INFO][5042] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" host="localhost" Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.292 [INFO][5042] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.322 [INFO][5042] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.342 [INFO][5042] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.371 [INFO][5042] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.376 [INFO][5042] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" host="localhost" Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.384 [INFO][5042] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156 Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.413 [INFO][5042] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" host="localhost" Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.453 [INFO][5042] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" host="localhost" Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.455 [INFO][5042] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" host="localhost" Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.465 [INFO][5042] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:45:12.646656 containerd[1607]: 2026-01-24 00:45:12.465 [INFO][5042] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" HandleID="k8s-pod-network.f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" Workload="localhost-k8s-goldmane--7c778bb748--j2nlt-eth0" Jan 24 00:45:12.648000 audit: BPF prog-id=210 op=LOAD Jan 24 00:45:12.648000 audit[5087]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff98c81360 a2=98 a3=1fffffffffffffff items=0 ppid=4793 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.648000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:45:12.648000 audit: BPF prog-id=210 op=UNLOAD Jan 24 00:45:12.648000 audit[5087]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff98c81330 a3=0 items=0 ppid=4793 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.648000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:45:12.651509 containerd[1607]: 2026-01-24 00:45:12.495 [INFO][5006] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" Namespace="calico-system" Pod="goldmane-7c778bb748-j2nlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j2nlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--j2nlt-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"0329b08b-e4ed-4b35-88d7-60baae652219", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-j2nlt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0a1a97ad043", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:12.651509 containerd[1607]: 
2026-01-24 00:45:12.495 [INFO][5006] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" Namespace="calico-system" Pod="goldmane-7c778bb748-j2nlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j2nlt-eth0" Jan 24 00:45:12.651509 containerd[1607]: 2026-01-24 00:45:12.496 [INFO][5006] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a1a97ad043 ContainerID="f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" Namespace="calico-system" Pod="goldmane-7c778bb748-j2nlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j2nlt-eth0" Jan 24 00:45:12.651509 containerd[1607]: 2026-01-24 00:45:12.525 [INFO][5006] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" Namespace="calico-system" Pod="goldmane-7c778bb748-j2nlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j2nlt-eth0" Jan 24 00:45:12.651509 containerd[1607]: 2026-01-24 00:45:12.558 [INFO][5006] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" Namespace="calico-system" Pod="goldmane-7c778bb748-j2nlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j2nlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--j2nlt-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"0329b08b-e4ed-4b35-88d7-60baae652219", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156", Pod:"goldmane-7c778bb748-j2nlt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0a1a97ad043", MAC:"d2:f5:3e:05:57:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:12.651509 containerd[1607]: 2026-01-24 00:45:12.621 [INFO][5006] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" Namespace="calico-system" Pod="goldmane-7c778bb748-j2nlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j2nlt-eth0" Jan 24 00:45:12.666919 systemd-networkd[1505]: cali8bf4927ed6c: Gained IPv6LL Jan 24 00:45:12.675000 audit: BPF prog-id=211 op=LOAD Jan 24 00:45:12.675000 audit[5087]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff98c81240 a2=94 a3=3 items=0 ppid=4793 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.675000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:45:12.689000 audit: BPF prog-id=211 op=UNLOAD Jan 24 00:45:12.689000 audit[5087]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff98c81240 a2=94 a3=3 items=0 ppid=4793 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.689000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:45:12.694000 audit: BPF prog-id=212 op=LOAD Jan 24 00:45:12.694000 audit[5087]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff98c81280 a2=94 a3=7fff98c81460 items=0 ppid=4793 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.694000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:45:12.697000 audit: BPF prog-id=212 op=UNLOAD Jan 24 00:45:12.697000 audit[5087]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff98c81280 a2=94 a3=7fff98c81460 items=0 ppid=4793 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.697000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:45:12.739732 kubelet[2869]: E0124 00:45:12.738871 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:12.781000 audit: BPF prog-id=213 op=LOAD Jan 24 00:45:12.781000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdce584310 a2=98 a3=3 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.781000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:12.781000 audit: BPF prog-id=213 op=UNLOAD Jan 24 00:45:12.781000 audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdce5842e0 a3=0 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.781000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:12.782000 audit: BPF prog-id=214 op=LOAD Jan 24 00:45:12.782000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdce584100 a2=94 a3=54428f items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.782000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:12.782000 audit: BPF prog-id=214 op=UNLOAD Jan 24 00:45:12.782000 audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdce584100 a2=94 a3=54428f items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.782000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:12.782000 audit: BPF prog-id=215 op=LOAD Jan 24 00:45:12.782000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdce584130 a2=94 a3=2 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.782000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:12.782000 audit: BPF prog-id=215 op=UNLOAD Jan 24 00:45:12.782000 audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdce584130 a2=0 a3=2 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.782000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:12.844024 kubelet[2869]: I0124 00:45:12.841879 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7zqd4" podStartSLOduration=95.841855127 podStartE2EDuration="1m35.841855127s" podCreationTimestamp="2026-01-24 00:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:45:12.734683359 +0000 UTC m=+99.453983646" watchObservedRunningTime="2026-01-24 00:45:12.841855127 +0000 UTC m=+99.561155343" Jan 24 00:45:12.857000 audit[5104]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5104 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:45:12.857000 audit[5104]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd0051e800 a2=0 a3=7ffd0051e7ec items=0 ppid=3027 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.857000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:45:12.860450 containerd[1607]: time="2026-01-24T00:45:12.859664280Z" level=info msg="connecting to shim f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156" 
address="unix:///run/containerd/s/0537c78d78f22b921a244451355260682f6efdfd8235cda4ddeef06a6f760a00" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:45:12.864000 audit[5104]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=5104 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:45:12.864000 audit[5104]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd0051e800 a2=0 a3=0 items=0 ppid=3027 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:12.864000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:45:13.060950 systemd[1]: Started cri-containerd-f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156.scope - libcontainer container f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156. Jan 24 00:45:13.116000 audit: BPF prog-id=216 op=LOAD Jan 24 00:45:13.117000 audit: BPF prog-id=217 op=LOAD Jan 24 00:45:13.117000 audit[5117]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5105 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323231646539353964336261373665333039656439343534616337 Jan 24 00:45:13.117000 audit: BPF prog-id=217 op=UNLOAD Jan 24 00:45:13.117000 audit[5117]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5105 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323231646539353964336261373665333039656439343534616337 Jan 24 00:45:13.117000 audit: BPF prog-id=218 op=LOAD Jan 24 00:45:13.117000 audit[5117]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5105 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323231646539353964336261373665333039656439343534616337 Jan 24 00:45:13.117000 audit: BPF prog-id=219 op=LOAD Jan 24 00:45:13.117000 audit[5117]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5105 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.117000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323231646539353964336261373665333039656439343534616337 Jan 24 00:45:13.117000 audit: BPF prog-id=219 op=UNLOAD Jan 24 00:45:13.117000 audit[5117]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5105 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323231646539353964336261373665333039656439343534616337 Jan 24 00:45:13.117000 audit: BPF prog-id=218 op=UNLOAD Jan 24 00:45:13.117000 audit[5117]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5105 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323231646539353964336261373665333039656439343534616337 Jan 24 00:45:13.117000 audit: BPF prog-id=220 op=LOAD Jan 24 00:45:13.117000 audit[5117]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5105 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323231646539353964336261373665333039656439343534616337 Jan 24 00:45:13.120932 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:45:13.245801 containerd[1607]: time="2026-01-24T00:45:13.245758245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-j2nlt,Uid:0329b08b-e4ed-4b35-88d7-60baae652219,Namespace:calico-system,Attempt:0,} returns sandbox id \"f0221de959d3ba76e309ed9454ac7ce20ad4bcadcfd21a012d00800527918156\"" Jan 24 00:45:13.251087 containerd[1607]: time="2026-01-24T00:45:13.250678019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:45:13.319481 containerd[1607]: time="2026-01-24T00:45:13.318588555Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:13.338034 containerd[1607]: time="2026-01-24T00:45:13.333997030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:45:13.338479 containerd[1607]: time="2026-01-24T00:45:13.334959220Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:13.338936 kubelet[2869]: E0124 00:45:13.338820 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:45:13.338936 kubelet[2869]: E0124 00:45:13.338875 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:45:13.339025 kubelet[2869]: E0124 00:45:13.338962 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-j2nlt_calico-system(0329b08b-e4ed-4b35-88d7-60baae652219): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:13.339025 kubelet[2869]: E0124 00:45:13.339007 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j2nlt" podUID="0329b08b-e4ed-4b35-88d7-60baae652219" Jan 24 00:45:13.340000 audit: BPF prog-id=221 op=LOAD Jan 24 00:45:13.340000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdce583ff0 a2=94 a3=1 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.340000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:13.340000 audit: BPF prog-id=221 op=UNLOAD Jan 24 00:45:13.340000 audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdce583ff0 a2=94 a3=1 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.340000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:13.354000 audit: BPF prog-id=222 op=LOAD Jan 24 00:45:13.354000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdce583fe0 a2=94 a3=4 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.354000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:13.354000 audit: BPF prog-id=222 op=UNLOAD Jan 24 00:45:13.354000 audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdce583fe0 a2=0 a3=4 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:45:13.354000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:13.355000 audit: BPF prog-id=223 op=LOAD Jan 24 00:45:13.355000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdce583e40 a2=94 a3=5 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.355000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:13.355000 audit: BPF prog-id=223 op=UNLOAD Jan 24 00:45:13.355000 audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdce583e40 a2=0 a3=5 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.355000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:13.355000 audit: BPF prog-id=224 op=LOAD Jan 24 00:45:13.355000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdce584060 a2=94 a3=6 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.355000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:13.355000 audit: BPF prog-id=224 op=UNLOAD Jan 24 00:45:13.355000 audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdce584060 a2=0 a3=6 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.355000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:13.355000 audit: BPF prog-id=225 op=LOAD Jan 24 00:45:13.355000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdce583810 a2=94 a3=88 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.355000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:13.356000 audit: BPF prog-id=226 op=LOAD Jan 24 00:45:13.356000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffdce583690 a2=94 a3=2 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.356000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:13.356000 audit: BPF prog-id=226 op=UNLOAD Jan 24 00:45:13.356000 audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffdce5836c0 a2=0 a3=7ffdce5837c0 items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.356000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:13.357000 audit: BPF prog-id=225 op=UNLOAD Jan 24 00:45:13.357000 
audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=33a8ad10 a2=0 a3=7c8640e5fa2503bb items=0 ppid=4793 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.357000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:45:13.386000 audit: BPF prog-id=227 op=LOAD Jan 24 00:45:13.386000 audit[5146]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc32b01080 a2=98 a3=1999999999999999 items=0 ppid=4793 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.386000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:45:13.387000 audit: BPF prog-id=227 op=UNLOAD Jan 24 00:45:13.387000 audit[5146]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc32b01050 a3=0 items=0 ppid=4793 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.387000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:45:13.387000 audit: BPF prog-id=228 op=LOAD Jan 24 00:45:13.387000 audit[5146]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc32b00f60 a2=94 a3=ffff items=0 ppid=4793 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.387000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:45:13.387000 audit: BPF prog-id=228 op=UNLOAD Jan 24 00:45:13.387000 audit[5146]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc32b00f60 a2=94 a3=ffff items=0 ppid=4793 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.387000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:45:13.387000 audit: BPF prog-id=229 op=LOAD Jan 24 00:45:13.387000 audit[5146]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc32b00fa0 a2=94 a3=7ffc32b01180 items=0 ppid=4793 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.387000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:45:13.387000 audit: BPF prog-id=229 op=UNLOAD Jan 24 00:45:13.387000 audit[5146]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc32b00fa0 a2=94 a3=7ffc32b01180 items=0 ppid=4793 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.387000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:45:13.497422 kubelet[2869]: E0124 00:45:13.497044 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:13.501895 kubelet[2869]: E0124 00:45:13.501331 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:13.507978 kubelet[2869]: E0124 00:45:13.506753 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j2nlt" podUID="0329b08b-e4ed-4b35-88d7-60baae652219" Jan 24 00:45:13.512343 kubelet[2869]: E0124 00:45:13.508774 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" podUID="5a82bd01-5299-411a-9329-279ee1a3e6ef" Jan 24 00:45:13.512343 kubelet[2869]: E0124 00:45:13.508960 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-5f8f47959d-9fk7m" podUID="e041bbba-486b-4bf8-b212-ca4fbb2d4a57" Jan 24 00:45:13.769064 systemd-networkd[1505]: vxlan.calico: Link UP Jan 24 00:45:13.769079 systemd-networkd[1505]: vxlan.calico: Gained carrier Jan 24 00:45:13.882586 systemd-networkd[1505]: cali0a1a97ad043: Gained IPv6LL Jan 24 00:45:13.957000 audit: BPF prog-id=230 op=LOAD Jan 24 00:45:13.957000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcb311cbf0 a2=98 a3=0 items=0 ppid=4793 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.957000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:45:13.958000 audit: BPF prog-id=230 op=UNLOAD Jan 24 00:45:13.958000 audit[5172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcb311cbc0 a3=0 items=0 ppid=4793 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.958000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:45:13.958000 audit: BPF prog-id=231 op=LOAD Jan 24 00:45:13.958000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcb311ca00 a2=94 a3=54428f items=0 ppid=4793 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.958000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:45:13.958000 audit: BPF prog-id=231 op=UNLOAD Jan 24 00:45:13.958000 audit[5172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcb311ca00 a2=94 a3=54428f items=0 ppid=4793 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.958000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:45:13.958000 audit: BPF prog-id=232 op=LOAD Jan 24 00:45:13.958000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcb311ca30 a2=94 a3=2 items=0 ppid=4793 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.958000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 
00:45:13.958000 audit: BPF prog-id=232 op=UNLOAD Jan 24 00:45:13.958000 audit[5172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcb311ca30 a2=0 a3=2 items=0 ppid=4793 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.958000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:45:13.958000 audit: BPF prog-id=233 op=LOAD Jan 24 00:45:13.958000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcb311c7e0 a2=94 a3=4 items=0 ppid=4793 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.958000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:45:13.959000 audit: BPF prog-id=233 op=UNLOAD Jan 24 00:45:13.959000 audit[5172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcb311c7e0 a2=94 a3=4 items=0 ppid=4793 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.959000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:45:13.959000 audit: BPF prog-id=234 op=LOAD Jan 24 00:45:13.959000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcb311c8e0 a2=94 a3=7ffcb311ca60 items=0 ppid=4793 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.959000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:45:13.959000 audit: BPF prog-id=234 op=UNLOAD Jan 24 00:45:13.959000 audit[5172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcb311c8e0 a2=0 a3=7ffcb311ca60 items=0 ppid=4793 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.959000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:45:13.960000 audit: BPF prog-id=235 op=LOAD Jan 24 00:45:13.960000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcb311c010 a2=94 a3=2 items=0 ppid=4793 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.960000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:45:13.960000 audit: BPF prog-id=235 op=UNLOAD Jan 24 00:45:13.960000 audit[5172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcb311c010 a2=0 a3=2 items=0 ppid=4793 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.960000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:45:13.960000 audit: BPF prog-id=236 op=LOAD Jan 24 00:45:13.960000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcb311c110 a2=94 a3=30 items=0 ppid=4793 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:13.960000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:45:14.008000 audit: BPF prog-id=237 op=LOAD Jan 24 00:45:14.008000 audit[5179]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe4544ebd0 a2=98 a3=0 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.008000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.008000 audit: BPF prog-id=237 op=UNLOAD Jan 24 00:45:14.008000 audit[5179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe4544eba0 a3=0 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.008000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.009000 audit: BPF prog-id=238 op=LOAD Jan 24 00:45:14.009000 audit[5179]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe4544e9c0 a2=94 a3=54428f items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.009000 audit: BPF prog-id=238 op=UNLOAD Jan 24 00:45:14.009000 audit[5179]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe4544e9c0 a2=94 a3=54428f items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.009000 audit: BPF prog-id=239 op=LOAD Jan 24 00:45:14.009000 audit[5179]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe4544e9f0 a2=94 a3=2 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.009000 audit: BPF prog-id=239 op=UNLOAD Jan 24 00:45:14.009000 audit[5179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe4544e9f0 a2=0 a3=2 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.057000 audit[5184]: NETFILTER_CFG table=filter:123 family=2 entries=17 op=nft_register_rule pid=5184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:45:14.057000 audit[5184]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff215eb920 a2=0 a3=7fff215eb90c items=0 ppid=3027 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.057000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:45:14.086000 audit[5184]: NETFILTER_CFG table=nat:124 family=2 entries=47 op=nft_register_chain pid=5184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:45:14.086000 audit[5184]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff215eb920 a2=0 a3=7fff215eb90c items=0 ppid=3027 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.086000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:45:14.406000 audit: BPF prog-id=240 op=LOAD Jan 24 00:45:14.406000 audit[5179]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe4544e8b0 a2=94 a3=1 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.406000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.407000 audit: BPF prog-id=240 op=UNLOAD Jan 24 00:45:14.407000 audit[5179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe4544e8b0 a2=94 a3=1 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.407000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.421000 audit: BPF prog-id=241 op=LOAD Jan 24 00:45:14.421000 audit[5179]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe4544e8a0 a2=94 a3=4 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.421000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.421000 audit: BPF prog-id=241 op=UNLOAD Jan 24 00:45:14.421000 audit[5179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe4544e8a0 a2=0 a3=4 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.421000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.421000 audit: BPF prog-id=242 op=LOAD Jan 24 00:45:14.421000 audit[5179]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe4544e700 a2=94 a3=5 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.421000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.422000 audit: BPF prog-id=242 op=UNLOAD Jan 24 00:45:14.422000 audit[5179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe4544e700 a2=0 a3=5 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.422000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.422000 audit: BPF prog-id=243 op=LOAD Jan 24 00:45:14.422000 audit[5179]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe4544e920 a2=94 a3=6 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.422000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.422000 audit: BPF prog-id=243 op=UNLOAD Jan 24 00:45:14.422000 audit[5179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe4544e920 a2=0 a3=6 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.422000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.422000 audit: BPF prog-id=244 op=LOAD Jan 24 00:45:14.422000 audit[5179]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe4544e0d0 a2=94 a3=88 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.422000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.428000 audit: BPF prog-id=245 op=LOAD Jan 24 00:45:14.428000 audit[5179]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe4544df50 a2=94 a3=2 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.428000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.428000 audit: BPF prog-id=245 op=UNLOAD Jan 24 00:45:14.428000 audit[5179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe4544df80 a2=0 a3=7ffe4544e080 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.428000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.429000 audit: BPF prog-id=244 op=UNLOAD Jan 24 00:45:14.429000 audit[5179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=13902d10 a2=0 a3=80fe89c739cc09c9 items=0 ppid=4793 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.429000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:45:14.452000 audit: BPF prog-id=236 op=UNLOAD Jan 24 00:45:14.452000 audit[4793]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 
a0=ffffffffffffff9c a1=c000864580 a2=0 a3=0 items=0 ppid=4740 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.452000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 24 00:45:14.504538 kubelet[2869]: E0124 00:45:14.504081 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:14.504901 kubelet[2869]: E0124 00:45:14.504604 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j2nlt" podUID="0329b08b-e4ed-4b35-88d7-60baae652219" Jan 24 00:45:14.686000 audit[5213]: NETFILTER_CFG table=nat:125 family=2 entries=15 op=nft_register_chain pid=5213 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:45:14.686000 audit[5213]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe1d8b1a70 a2=0 a3=7ffe1d8b1a5c items=0 ppid=4793 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.686000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:45:14.703000 audit[5217]: NETFILTER_CFG table=mangle:126 family=2 entries=16 op=nft_register_chain pid=5217 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:45:14.703000 audit[5217]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc8d8d28e0 a2=0 a3=7ffc8d8d28cc items=0 ppid=4793 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.703000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:45:14.705000 audit[5212]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=5212 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:45:14.705000 audit[5212]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff3658e700 a2=0 a3=7fff3658e6ec items=0 ppid=4793 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.705000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:45:14.711000 audit[5214]: NETFILTER_CFG table=filter:128 family=2 entries=259 op=nft_register_chain pid=5214 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:45:14.711000 audit[5214]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=154412 a0=3 a1=7ffe93c12270 a2=0 a3=7ffe93c1225c items=0 ppid=4793 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:14.711000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:45:14.945385 systemd-networkd[1505]: vxlan.calico: Gained IPv6LL Jan 24 00:45:18.749943 containerd[1607]: time="2026-01-24T00:45:18.749688381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7d444f9d-54g8g,Uid:f244c052-aa71-4ccd-aaea-117d2939edf5,Namespace:calico-system,Attempt:0,}" Jan 24 00:45:19.173650 systemd-networkd[1505]: calic377a67af7a: Link UP Jan 24 00:45:19.175863 systemd-networkd[1505]: calic377a67af7a: Gained carrier Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:18.885 [INFO][5229] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f7d444f9d--54g8g-eth0 calico-kube-controllers-5f7d444f9d- calico-system f244c052-aa71-4ccd-aaea-117d2939edf5 967 0 2026-01-24 00:44:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f7d444f9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5f7d444f9d-54g8g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic377a67af7a [] [] }} ContainerID="fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" Namespace="calico-system" Pod="calico-kube-controllers-5f7d444f9d-54g8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7d444f9d--54g8g-" Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:18.885 [INFO][5229] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" Namespace="calico-system" Pod="calico-kube-controllers-5f7d444f9d-54g8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7d444f9d--54g8g-eth0" Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:18.999 [INFO][5239] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" HandleID="k8s-pod-network.fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" Workload="localhost-k8s-calico--kube--controllers--5f7d444f9d--54g8g-eth0" Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.001 [INFO][5239] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" HandleID="k8s-pod-network.fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" Workload="localhost-k8s-calico--kube--controllers--5f7d444f9d--54g8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001394e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f7d444f9d-54g8g", "timestamp":"2026-01-24 00:45:18.999601182 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.002 [INFO][5239] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.002 [INFO][5239] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.002 [INFO][5239] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.040 [INFO][5239] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" host="localhost" Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.067 [INFO][5239] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.083 [INFO][5239] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.091 [INFO][5239] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.098 [INFO][5239] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.098 [INFO][5239] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" host="localhost" Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.103 [INFO][5239] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239 Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.120 [INFO][5239] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" host="localhost" Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.153 [INFO][5239] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" host="localhost" Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.154 [INFO][5239] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" host="localhost" Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.154 [INFO][5239] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
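
A minimal sketch (added for illustration, not journal output): the ipam/ipam.go records ending here show calico-ipam confirming an affinity for block 192.168.88.128/26 on host "localhost" and then claiming 192.168.88.135 from that block for calico-kube-controllers-5f7d444f9d-54g8g. Using only the block and address quoted from the log, Python's ipaddress module confirms the arithmetic: a /26 starting at .128 spans 192.168.88.128-192.168.88.191, so .135 is a valid member of the claimed block.

    # Sanity-check of the Calico IPAM assignment recorded above.
    # The block and address below are copied from the ipam/ipam.go log lines; nothing else is assumed.
    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")   # affinity block loaded for host "localhost"
    assigned = ipaddress.ip_address("192.168.88.135")   # address claimed for the calico-kube-controllers pod

    print(block.network_address, "-", block.broadcast_address)  # 192.168.88.128 - 192.168.88.191
    print(assigned in block)                                    # True: the claimed IP lies inside the block
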
Jan 24 00:45:19.208095 containerd[1607]: 2026-01-24 00:45:19.154 [INFO][5239] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" HandleID="k8s-pod-network.fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" Workload="localhost-k8s-calico--kube--controllers--5f7d444f9d--54g8g-eth0" Jan 24 00:45:19.209834 containerd[1607]: 2026-01-24 00:45:19.168 [INFO][5229] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" Namespace="calico-system" Pod="calico-kube-controllers-5f7d444f9d-54g8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7d444f9d--54g8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f7d444f9d--54g8g-eth0", GenerateName:"calico-kube-controllers-5f7d444f9d-", Namespace:"calico-system", SelfLink:"", UID:"f244c052-aa71-4ccd-aaea-117d2939edf5", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f7d444f9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f7d444f9d-54g8g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic377a67af7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:19.209834 containerd[1607]: 2026-01-24 00:45:19.168 [INFO][5229] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" Namespace="calico-system" Pod="calico-kube-controllers-5f7d444f9d-54g8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7d444f9d--54g8g-eth0" Jan 24 00:45:19.209834 containerd[1607]: 2026-01-24 00:45:19.169 [INFO][5229] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic377a67af7a ContainerID="fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" Namespace="calico-system" Pod="calico-kube-controllers-5f7d444f9d-54g8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7d444f9d--54g8g-eth0" Jan 24 00:45:19.209834 containerd[1607]: 2026-01-24 00:45:19.180 [INFO][5229] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" Namespace="calico-system" Pod="calico-kube-controllers-5f7d444f9d-54g8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7d444f9d--54g8g-eth0" Jan 24 00:45:19.209834 containerd[1607]: 2026-01-24 00:45:19.181 [INFO][5229] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" Namespace="calico-system" Pod="calico-kube-controllers-5f7d444f9d-54g8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7d444f9d--54g8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f7d444f9d--54g8g-eth0", GenerateName:"calico-kube-controllers-5f7d444f9d-", Namespace:"calico-system", SelfLink:"", UID:"f244c052-aa71-4ccd-aaea-117d2939edf5", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f7d444f9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239", Pod:"calico-kube-controllers-5f7d444f9d-54g8g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic377a67af7a", MAC:"5a:01:58:2b:18:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:19.209834 containerd[1607]: 2026-01-24 00:45:19.203 [INFO][5229] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" Namespace="calico-system" Pod="calico-kube-controllers-5f7d444f9d-54g8g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7d444f9d--54g8g-eth0" Jan 24 00:45:19.258000 audit[5257]: NETFILTER_CFG table=filter:129 family=2 entries=56 op=nft_register_chain pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:45:19.267070 kernel: kauditd_printk_skb: 294 callbacks suppressed Jan 24 00:45:19.267403 kernel: audit: type=1325 audit(1769215519.258:720): table=filter:129 family=2 entries=56 op=nft_register_chain pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:45:19.258000 audit[5257]: SYSCALL arch=c000003e syscall=46 success=yes exit=25516 a0=3 a1=7ffcdf961d60 a2=0 a3=7ffcdf961d4c items=0 ppid=4793 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:19.319567 kernel: audit: type=1300 audit(1769215519.258:720): arch=c000003e syscall=46 success=yes exit=25516 a0=3 a1=7ffcdf961d60 a2=0 a3=7ffcdf961d4c items=0 ppid=4793 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:19.258000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:45:19.335332 containerd[1607]: time="2026-01-24T00:45:19.323123273Z" level=info msg="connecting to shim fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239" address="unix:///run/containerd/s/a1c22382aff5b1401abbef4572d003039dbabd0a0a8c444d39827607694b2f2f" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:45:19.346903 kernel: audit: type=1327 audit(1769215519.258:720): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:45:19.432056 systemd[1]: Started cri-containerd-fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239.scope - libcontainer container fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239. Jan 24 00:45:19.486000 audit: BPF prog-id=246 op=LOAD Jan 24 00:45:19.498072 kernel: audit: type=1334 audit(1769215519.486:721): prog-id=246 op=LOAD Jan 24 00:45:19.498306 kernel: audit: type=1334 audit(1769215519.488:722): prog-id=247 op=LOAD Jan 24 00:45:19.488000 audit: BPF prog-id=247 op=LOAD Jan 24 00:45:19.495471 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:45:19.488000 audit[5275]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=5265 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:19.522353 kernel: audit: type=1300 audit(1769215519.488:722): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=5265 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:19.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623835616538376330643934373434343738376135666335323832 Jan 24 00:45:19.551456 kernel: audit: type=1327 audit(1769215519.488:722): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623835616538376330643934373434343738376135666335323832 Jan 24 00:45:19.488000 audit: BPF prog-id=247 op=UNLOAD Jan 24 00:45:19.488000 audit[5275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5265 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:19.579512 kernel: audit: type=1334 audit(1769215519.488:723): prog-id=247 op=UNLOAD Jan 24 00:45:19.579596 kernel: audit: type=1300 audit(1769215519.488:723): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5265 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:19.488000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623835616538376330643934373434343738376135666335323832 Jan 24 00:45:19.488000 audit: BPF prog-id=248 op=LOAD Jan 24 00:45:19.488000 audit[5275]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=5265 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:19.599458 kernel: audit: type=1327 audit(1769215519.488:723): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623835616538376330643934373434343738376135666335323832 Jan 24 00:45:19.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623835616538376330643934373434343738376135666335323832 Jan 24 00:45:19.488000 audit: BPF prog-id=249 op=LOAD Jan 24 00:45:19.488000 audit[5275]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=5265 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:19.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623835616538376330643934373434343738376135666335323832 Jan 24 00:45:19.488000 audit: BPF prog-id=249 op=UNLOAD Jan 24 00:45:19.488000 audit[5275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5265 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:19.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623835616538376330643934373434343738376135666335323832 Jan 24 00:45:19.488000 audit: BPF prog-id=248 op=UNLOAD Jan 24 00:45:19.488000 audit[5275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5265 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:19.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623835616538376330643934373434343738376135666335323832 Jan 24 00:45:19.488000 audit: BPF prog-id=250 op=LOAD Jan 24 00:45:19.488000 audit[5275]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=5265 pid=5275 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:19.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623835616538376330643934373434343738376135666335323832 Jan 24 00:45:19.610949 containerd[1607]: time="2026-01-24T00:45:19.610842453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7d444f9d-54g8g,Uid:f244c052-aa71-4ccd-aaea-117d2939edf5,Namespace:calico-system,Attempt:0,} returns sandbox id \"fbb85ae87c0d947444787a5fc52820a8e05194a341554646a94b1e295e981239\"" Jan 24 00:45:19.615063 containerd[1607]: time="2026-01-24T00:45:19.615033314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:45:19.869799 containerd[1607]: time="2026-01-24T00:45:19.869664300Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:19.875651 containerd[1607]: time="2026-01-24T00:45:19.875482168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:45:19.876097 containerd[1607]: time="2026-01-24T00:45:19.875607282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:19.877533 kubelet[2869]: E0124 00:45:19.876975 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:45:19.877533 kubelet[2869]: E0124 00:45:19.877100 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:45:19.878529 kubelet[2869]: E0124 00:45:19.877812 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5f7d444f9d-54g8g_calico-system(f244c052-aa71-4ccd-aaea-117d2939edf5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:19.878529 kubelet[2869]: E0124 00:45:19.877860 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" podUID="f244c052-aa71-4ccd-aaea-117d2939edf5" Jan 24 00:45:20.065316 update_engine[1586]: I20260124 00:45:20.063676 1586 prefs.cc:52] 
certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 24 00:45:20.065316 update_engine[1586]: I20260124 00:45:20.064679 1586 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 24 00:45:20.069671 update_engine[1586]: I20260124 00:45:20.069499 1586 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 24 00:45:20.070873 update_engine[1586]: I20260124 00:45:20.070765 1586 omaha_request_params.cc:62] Current group set to alpha Jan 24 00:45:20.071394 update_engine[1586]: I20260124 00:45:20.071070 1586 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 24 00:45:20.071394 update_engine[1586]: I20260124 00:45:20.071094 1586 update_attempter.cc:643] Scheduling an action processor start. Jan 24 00:45:20.071394 update_engine[1586]: I20260124 00:45:20.071118 1586 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 24 00:45:20.071748 update_engine[1586]: I20260124 00:45:20.071575 1586 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 24 00:45:20.071863 update_engine[1586]: I20260124 00:45:20.071785 1586 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 24 00:45:20.072023 update_engine[1586]: I20260124 00:45:20.071854 1586 omaha_request_action.cc:272] Request: Jan 24 00:45:20.072023 update_engine[1586]: Jan 24 00:45:20.072023 update_engine[1586]: Jan 24 00:45:20.072023 update_engine[1586]: Jan 24 00:45:20.072023 update_engine[1586]: Jan 24 00:45:20.072023 update_engine[1586]: Jan 24 00:45:20.072023 update_engine[1586]: Jan 24 00:45:20.072023 update_engine[1586]: Jan 24 00:45:20.072023 update_engine[1586]: Jan 24 00:45:20.072023 update_engine[1586]: I20260124 00:45:20.071870 1586 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:45:20.098425 update_engine[1586]: I20260124 00:45:20.094090 1586 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:45:20.099443 update_engine[1586]: I20260124 00:45:20.098595 1586 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
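The empty Request lines above and the "Posting an Omaha request to disabled" message indicate that the update server URL is literally the string "disabled", so the DNS failure that follows ("Could not resolve host: disabled") is the expected outcome rather than a network fault. A minimal sketch of the configuration that typically produces this on Flatcar is shown below; the file path and keys are assumptions and are not captured in this log (only the group name "alpha" appears above).

    # assumed: /etc/flatcar/update.conf (not part of this log)
    GROUP=alpha
    SERVER=disabled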
Jan 24 00:45:20.106847 locksmithd[1650]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 24 00:45:20.113936 update_engine[1586]: E20260124 00:45:20.111855 1586 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:45:20.113936 update_engine[1586]: I20260124 00:45:20.112725 1586 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 24 00:45:20.543485 systemd-networkd[1505]: calic377a67af7a: Gained IPv6LL Jan 24 00:45:20.570405 kubelet[2869]: E0124 00:45:20.570355 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" podUID="f244c052-aa71-4ccd-aaea-117d2939edf5" Jan 24 00:45:20.746549 containerd[1607]: time="2026-01-24T00:45:20.746402735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-48xkv,Uid:985a1218-3c37-4f6d-aa83-5ce6fdad91a9,Namespace:calico-system,Attempt:0,}" Jan 24 00:45:21.140935 systemd-networkd[1505]: calid286494bd89: Link UP Jan 24 00:45:21.143472 systemd-networkd[1505]: calid286494bd89: Gained carrier Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:20.890 [INFO][5308] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--48xkv-eth0 csi-node-driver- calico-system 985a1218-3c37-4f6d-aa83-5ce6fdad91a9 817 0 2026-01-24 00:44:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-48xkv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid286494bd89 [] [] }} ContainerID="80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" Namespace="calico-system" Pod="csi-node-driver-48xkv" WorkloadEndpoint="localhost-k8s-csi--node--driver--48xkv-" Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:20.890 [INFO][5308] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" Namespace="calico-system" Pod="csi-node-driver-48xkv" WorkloadEndpoint="localhost-k8s-csi--node--driver--48xkv-eth0" Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:20.980 [INFO][5323] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" HandleID="k8s-pod-network.80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" Workload="localhost-k8s-csi--node--driver--48xkv-eth0" Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:20.980 [INFO][5323] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" HandleID="k8s-pod-network.80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" 
Workload="localhost-k8s-csi--node--driver--48xkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000231800), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-48xkv", "timestamp":"2026-01-24 00:45:20.980550595 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:20.980 [INFO][5323] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:20.980 [INFO][5323] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:20.980 [INFO][5323] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:21.013 [INFO][5323] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" host="localhost" Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:21.040 [INFO][5323] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:21.055 [INFO][5323] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:21.060 [INFO][5323] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:21.070 [INFO][5323] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:21.071 [INFO][5323] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" host="localhost" Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:21.082 [INFO][5323] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:21.102 [INFO][5323] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" host="localhost" Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:21.126 [INFO][5323] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" host="localhost" Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:21.126 [INFO][5323] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" host="localhost" Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:21.128 [INFO][5323] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:45:21.186527 containerd[1607]: 2026-01-24 00:45:21.128 [INFO][5323] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" HandleID="k8s-pod-network.80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" Workload="localhost-k8s-csi--node--driver--48xkv-eth0" Jan 24 00:45:21.192672 containerd[1607]: 2026-01-24 00:45:21.135 [INFO][5308] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" Namespace="calico-system" Pod="csi-node-driver-48xkv" WorkloadEndpoint="localhost-k8s-csi--node--driver--48xkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--48xkv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"985a1218-3c37-4f6d-aa83-5ce6fdad91a9", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-48xkv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid286494bd89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:21.192672 containerd[1607]: 2026-01-24 00:45:21.135 [INFO][5308] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" Namespace="calico-system" Pod="csi-node-driver-48xkv" WorkloadEndpoint="localhost-k8s-csi--node--driver--48xkv-eth0" Jan 24 00:45:21.192672 containerd[1607]: 2026-01-24 00:45:21.135 [INFO][5308] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid286494bd89 ContainerID="80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" Namespace="calico-system" Pod="csi-node-driver-48xkv" WorkloadEndpoint="localhost-k8s-csi--node--driver--48xkv-eth0" Jan 24 00:45:21.192672 containerd[1607]: 2026-01-24 00:45:21.145 [INFO][5308] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" Namespace="calico-system" Pod="csi-node-driver-48xkv" WorkloadEndpoint="localhost-k8s-csi--node--driver--48xkv-eth0" Jan 24 00:45:21.192672 containerd[1607]: 2026-01-24 00:45:21.149 [INFO][5308] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" Namespace="calico-system" Pod="csi-node-driver-48xkv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--48xkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--48xkv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"985a1218-3c37-4f6d-aa83-5ce6fdad91a9", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da", Pod:"csi-node-driver-48xkv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid286494bd89", MAC:"62:d6:c9:b3:9a:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:21.192672 containerd[1607]: 2026-01-24 00:45:21.176 [INFO][5308] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" Namespace="calico-system" Pod="csi-node-driver-48xkv" WorkloadEndpoint="localhost-k8s-csi--node--driver--48xkv-eth0" Jan 24 00:45:21.220000 audit[5339]: NETFILTER_CFG table=filter:130 family=2 entries=60 op=nft_register_chain pid=5339 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:45:21.220000 audit[5339]: SYSCALL arch=c000003e syscall=46 success=yes exit=26704 a0=3 a1=7fffe7bf27f0 a2=0 a3=7fffe7bf27dc items=0 ppid=4793 pid=5339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:21.220000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:45:21.247546 containerd[1607]: time="2026-01-24T00:45:21.247113657Z" level=info msg="connecting to shim 80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da" address="unix:///run/containerd/s/b6222062b504143c482c5056916fab1c2d50edd53de09b75abcbbb44db2c866c" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:45:21.349795 systemd[1]: Started cri-containerd-80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da.scope - libcontainer container 80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da. 
Jan 24 00:45:21.392000 audit: BPF prog-id=251 op=LOAD Jan 24 00:45:21.394000 audit: BPF prog-id=252 op=LOAD Jan 24 00:45:21.394000 audit[5359]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=5348 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:21.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830363036313031363638636138323361623364643936383830336663 Jan 24 00:45:21.396000 audit: BPF prog-id=252 op=UNLOAD Jan 24 00:45:21.396000 audit[5359]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5348 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:21.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830363036313031363638636138323361623364643936383830336663 Jan 24 00:45:21.396000 audit: BPF prog-id=253 op=LOAD Jan 24 00:45:21.396000 audit[5359]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=5348 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:21.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830363036313031363638636138323361623364643936383830336663 Jan 24 00:45:21.396000 audit: BPF prog-id=254 op=LOAD Jan 24 00:45:21.396000 audit[5359]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=5348 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:21.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830363036313031363638636138323361623364643936383830336663 Jan 24 00:45:21.397000 audit: BPF prog-id=254 op=UNLOAD Jan 24 00:45:21.397000 audit[5359]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5348 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:21.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830363036313031363638636138323361623364643936383830336663 Jan 24 00:45:21.397000 audit: BPF prog-id=253 op=UNLOAD Jan 24 00:45:21.397000 audit[5359]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5348 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:21.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830363036313031363638636138323361623364643936383830336663 Jan 24 00:45:21.397000 audit: BPF prog-id=255 op=LOAD Jan 24 00:45:21.397000 audit[5359]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=5348 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:21.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830363036313031363638636138323361623364643936383830336663 Jan 24 00:45:21.401408 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:45:21.490769 containerd[1607]: time="2026-01-24T00:45:21.490657175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-48xkv,Uid:985a1218-3c37-4f6d-aa83-5ce6fdad91a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"80606101668ca823ab3dd968803fc46e5cac1270dc89a685ea402469379939da\"" Jan 24 00:45:21.495971 containerd[1607]: time="2026-01-24T00:45:21.494972300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:45:21.594150 kubelet[2869]: E0124 00:45:21.594039 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" podUID="f244c052-aa71-4ccd-aaea-117d2939edf5" Jan 24 00:45:21.652570 containerd[1607]: time="2026-01-24T00:45:21.652388586Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:21.671670 containerd[1607]: time="2026-01-24T00:45:21.665546290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:45:21.673710 kubelet[2869]: E0124 00:45:21.673653 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:45:21.674091 kubelet[2869]: E0124 00:45:21.674056 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:45:21.674857 containerd[1607]: time="2026-01-24T00:45:21.666805962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:21.674951 kubelet[2869]: E0124 00:45:21.674684 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-48xkv_calico-system(985a1218-3c37-4f6d-aa83-5ce6fdad91a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:21.688463 containerd[1607]: time="2026-01-24T00:45:21.687382870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:45:21.775501 containerd[1607]: time="2026-01-24T00:45:21.775449566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:21.781196 containerd[1607]: time="2026-01-24T00:45:21.781039390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:45:21.781642 containerd[1607]: time="2026-01-24T00:45:21.781342324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:21.785043 kubelet[2869]: E0124 00:45:21.781588 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:45:21.785043 kubelet[2869]: E0124 00:45:21.784376 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:45:21.785043 kubelet[2869]: E0124 00:45:21.784592 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-48xkv_calico-system(985a1218-3c37-4f6d-aa83-5ce6fdad91a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:21.785043 kubelet[2869]: E0124 00:45:21.784644 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:45:22.600478 kubelet[2869]: E0124 00:45:22.600407 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:45:22.842690 systemd-networkd[1505]: calid286494bd89: Gained IPv6LL Jan 24 00:45:23.755434 containerd[1607]: time="2026-01-24T00:45:23.752751892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:45:23.831627 containerd[1607]: time="2026-01-24T00:45:23.831374627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:23.843568 containerd[1607]: time="2026-01-24T00:45:23.840651256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:45:23.843568 containerd[1607]: time="2026-01-24T00:45:23.840818047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:23.850612 kubelet[2869]: E0124 00:45:23.841432 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:23.850612 kubelet[2869]: E0124 00:45:23.841486 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:23.850612 kubelet[2869]: E0124 00:45:23.841576 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5594cdc7fb-c8l7f_calico-apiserver(050a17cf-0e04-46c0-ad64-4ce3987ef3d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:23.850612 kubelet[2869]: E0124 00:45:23.841624 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" podUID="050a17cf-0e04-46c0-ad64-4ce3987ef3d5" Jan 24 00:45:24.251686 systemd[1]: Started sshd@7-10.0.0.71:22-10.0.0.1:43554.service - OpenSSH per-connection server daemon (10.0.0.1:43554). Jan 24 00:45:24.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.71:22-10.0.0.1:43554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:24.526913 sshd[5384]: Accepted publickey for core from 10.0.0.1 port 43554 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:45:24.525000 audit[5384]: USER_ACCT pid=5384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:24.556888 kernel: kauditd_printk_skb: 41 callbacks suppressed Jan 24 00:45:24.557033 kernel: audit: type=1101 audit(1769215524.525:739): pid=5384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:24.557000 audit[5384]: CRED_ACQ pid=5384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:24.560745 sshd-session[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:24.580372 kernel: audit: type=1103 audit(1769215524.557:740): pid=5384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:24.584459 systemd-logind[1585]: New session 9 of user core. Jan 24 00:45:24.557000 audit[5384]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd15a4e10 a2=3 a3=0 items=0 ppid=1 pid=5384 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:24.597360 kernel: audit: type=1006 audit(1769215524.557:741): pid=5384 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 24 00:45:24.597407 kernel: audit: type=1300 audit(1769215524.557:741): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd15a4e10 a2=3 a3=0 items=0 ppid=1 pid=5384 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:24.557000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:24.637748 kernel: audit: type=1327 audit(1769215524.557:741): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:24.643699 systemd[1]: Started session-9.scope - Session 9 of User core. 
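Every PullImage attempt in this window fails the same way: ghcr.io answers 404 Not Found for the Calico v3.30.4 images, so kubelet reports ErrImagePull and then backs off with ImagePullBackOff; as far as this log shows, nothing on the node side is misconfigured. A hedged way to reproduce the failure outside kubelet, directly against the CRI runtime, is:

    crictl pull ghcr.io/flatcar/calico/kube-controllers:v3.30.4

which would be expected to return the same "not found" error until the tag is published or the image reference is corrected.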
Jan 24 00:45:24.649000 audit[5384]: USER_START pid=5384 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:24.682670 kernel: audit: type=1105 audit(1769215524.649:742): pid=5384 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:24.653000 audit[5391]: CRED_ACQ pid=5391 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:24.704434 kernel: audit: type=1103 audit(1769215524.653:743): pid=5391 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:24.937469 sshd[5391]: Connection closed by 10.0.0.1 port 43554 Jan 24 00:45:24.937939 sshd-session[5384]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:24.949000 audit[5384]: USER_END pid=5384 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:24.958746 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:43554.service: Deactivated successfully. Jan 24 00:45:24.963421 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:45:24.966381 systemd-logind[1585]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:45:24.970120 systemd-logind[1585]: Removed session 9. Jan 24 00:45:24.978559 kernel: audit: type=1106 audit(1769215524.949:744): pid=5384 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:24.978704 kernel: audit: type=1104 audit(1769215524.950:745): pid=5384 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:24.950000 audit[5384]: CRED_DISP pid=5384 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:25.000464 kernel: audit: type=1131 audit(1769215524.958:746): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.71:22-10.0.0.1:43554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:45:24.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.71:22-10.0.0.1:43554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:26.747797 containerd[1607]: time="2026-01-24T00:45:26.747430864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:45:26.883438 containerd[1607]: time="2026-01-24T00:45:26.881635757Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:26.888004 containerd[1607]: time="2026-01-24T00:45:26.887862394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:45:26.888004 containerd[1607]: time="2026-01-24T00:45:26.887992045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:26.888891 kubelet[2869]: E0124 00:45:26.888499 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:45:26.888891 kubelet[2869]: E0124 00:45:26.888554 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:45:26.888891 kubelet[2869]: E0124 00:45:26.888632 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5f8f47959d-9fk7m_calico-system(e041bbba-486b-4bf8-b212-ca4fbb2d4a57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:26.892054 containerd[1607]: time="2026-01-24T00:45:26.891807656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:45:27.053920 containerd[1607]: time="2026-01-24T00:45:27.053695104Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:27.063835 containerd[1607]: time="2026-01-24T00:45:27.063694232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:45:27.063835 containerd[1607]: time="2026-01-24T00:45:27.063817252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:27.067820 kubelet[2869]: E0124 00:45:27.064746 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:45:27.067820 
kubelet[2869]: E0124 00:45:27.064798 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:45:27.067820 kubelet[2869]: E0124 00:45:27.064878 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5f8f47959d-9fk7m_calico-system(e041bbba-486b-4bf8-b212-ca4fbb2d4a57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:27.067820 kubelet[2869]: E0124 00:45:27.064927 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f47959d-9fk7m" podUID="e041bbba-486b-4bf8-b212-ca4fbb2d4a57" Jan 24 00:45:27.753636 containerd[1607]: time="2026-01-24T00:45:27.753589213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:45:27.836669 containerd[1607]: time="2026-01-24T00:45:27.836535614Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:27.839729 containerd[1607]: time="2026-01-24T00:45:27.839442679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:45:27.839729 containerd[1607]: time="2026-01-24T00:45:27.839580169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:27.839959 kubelet[2869]: E0124 00:45:27.839717 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:27.839959 kubelet[2869]: E0124 00:45:27.839765 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:27.839959 kubelet[2869]: E0124 00:45:27.839845 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5594cdc7fb-7kk8c_calico-apiserver(5a82bd01-5299-411a-9329-279ee1a3e6ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:27.839959 kubelet[2869]: E0124 00:45:27.839894 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" podUID="5a82bd01-5299-411a-9329-279ee1a3e6ef" Jan 24 00:45:28.753923 containerd[1607]: time="2026-01-24T00:45:28.753849973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:45:28.834657 containerd[1607]: time="2026-01-24T00:45:28.833097280Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:28.859545 containerd[1607]: time="2026-01-24T00:45:28.859432976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:45:28.859545 containerd[1607]: time="2026-01-24T00:45:28.859535758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:28.859792 kubelet[2869]: E0124 00:45:28.859688 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:45:28.859792 kubelet[2869]: E0124 00:45:28.859738 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:45:28.861368 kubelet[2869]: E0124 00:45:28.860012 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-j2nlt_calico-system(0329b08b-e4ed-4b35-88d7-60baae652219): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:28.861368 kubelet[2869]: E0124 00:45:28.860059 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j2nlt" podUID="0329b08b-e4ed-4b35-88d7-60baae652219" Jan 24 00:45:29.967244 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:43566.service - OpenSSH per-connection server daemon (10.0.0.1:43566). Jan 24 00:45:29.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.71:22-10.0.0.1:43566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:45:29.986417 update_engine[1586]: I20260124 00:45:29.986339 1586 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:45:29.987067 update_engine[1586]: I20260124 00:45:29.986955 1586 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:45:29.988825 update_engine[1586]: I20260124 00:45:29.988554 1586 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:45:29.994374 kernel: audit: type=1130 audit(1769215529.966:747): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.71:22-10.0.0.1:43566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:30.005362 update_engine[1586]: E20260124 00:45:30.004971 1586 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:45:30.005712 update_engine[1586]: I20260124 00:45:30.005484 1586 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 24 00:45:30.159000 audit[5419]: USER_ACCT pid=5419 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:30.166509 sshd[5419]: Accepted publickey for core from 10.0.0.1 port 43566 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:45:30.167419 sshd-session[5419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:30.187699 kernel: audit: type=1101 audit(1769215530.159:748): pid=5419 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:30.187851 kernel: audit: type=1103 audit(1769215530.163:749): pid=5419 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:30.163000 audit[5419]: CRED_ACQ pid=5419 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:30.202565 systemd-logind[1585]: New session 10 of user core. 
Jan 24 00:45:30.215633 kernel: audit: type=1006 audit(1769215530.163:750): pid=5419 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 24 00:45:30.163000 audit[5419]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc54ad6250 a2=3 a3=0 items=0 ppid=1 pid=5419 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:30.243471 kernel: audit: type=1300 audit(1769215530.163:750): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc54ad6250 a2=3 a3=0 items=0 ppid=1 pid=5419 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:30.163000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:30.267357 kernel: audit: type=1327 audit(1769215530.163:750): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:30.270459 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:45:30.282000 audit[5419]: USER_START pid=5419 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:30.282000 audit[5423]: CRED_ACQ pid=5423 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:30.351396 kernel: audit: type=1105 audit(1769215530.282:751): pid=5419 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:30.351515 kernel: audit: type=1103 audit(1769215530.282:752): pid=5423 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:30.640917 sshd[5423]: Connection closed by 10.0.0.1 port 43566 Jan 24 00:45:30.639694 sshd-session[5419]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:30.651000 audit[5419]: USER_END pid=5419 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:30.663024 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:43566.service: Deactivated successfully. Jan 24 00:45:30.668921 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:45:30.676023 systemd-logind[1585]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:45:30.677883 systemd-logind[1585]: Removed session 10. 
Jan 24 00:45:30.693632 kernel: audit: type=1106 audit(1769215530.651:753): pid=5419 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:30.693767 kernel: audit: type=1104 audit(1769215530.651:754): pid=5419 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:30.651000 audit[5419]: CRED_DISP pid=5419 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:30.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.71:22-10.0.0.1:43566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:34.745529 containerd[1607]: time="2026-01-24T00:45:34.744395891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:45:34.841913 containerd[1607]: time="2026-01-24T00:45:34.841740138Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:34.851265 containerd[1607]: time="2026-01-24T00:45:34.850733501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:34.851265 containerd[1607]: time="2026-01-24T00:45:34.851247890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:45:34.854236 kubelet[2869]: E0124 00:45:34.853805 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:45:34.854982 kubelet[2869]: E0124 00:45:34.854561 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:45:34.854982 kubelet[2869]: E0124 00:45:34.854678 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-48xkv_calico-system(985a1218-3c37-4f6d-aa83-5ce6fdad91a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:34.862981 containerd[1607]: time="2026-01-24T00:45:34.862583814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:45:34.939381 containerd[1607]: time="2026-01-24T00:45:34.938659439Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:34.943113 containerd[1607]: time="2026-01-24T00:45:34.942045015Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:45:34.943113 containerd[1607]: time="2026-01-24T00:45:34.942407107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:34.944961 kubelet[2869]: E0124 00:45:34.942571 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:45:34.944961 kubelet[2869]: E0124 00:45:34.942787 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:45:34.944961 kubelet[2869]: E0124 00:45:34.942877 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-48xkv_calico-system(985a1218-3c37-4f6d-aa83-5ce6fdad91a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:34.944961 kubelet[2869]: E0124 00:45:34.942935 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:45:35.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.71:22-10.0.0.1:56114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:35.661768 systemd[1]: Started sshd@9-10.0.0.71:22-10.0.0.1:56114.service - OpenSSH per-connection server daemon (10.0.0.1:56114). Jan 24 00:45:35.670104 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:45:35.670423 kernel: audit: type=1130 audit(1769215535.660:756): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.71:22-10.0.0.1:56114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:45:35.751073 containerd[1607]: time="2026-01-24T00:45:35.750121870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:45:35.851417 containerd[1607]: time="2026-01-24T00:45:35.850883371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:35.861716 containerd[1607]: time="2026-01-24T00:45:35.859022352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:45:35.861716 containerd[1607]: time="2026-01-24T00:45:35.859121087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:35.868271 kubelet[2869]: E0124 00:45:35.864591 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:45:35.868271 kubelet[2869]: E0124 00:45:35.865688 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:45:35.868271 kubelet[2869]: E0124 00:45:35.866068 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5f7d444f9d-54g8g_calico-system(f244c052-aa71-4ccd-aaea-117d2939edf5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:35.868271 kubelet[2869]: E0124 00:45:35.866118 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" podUID="f244c052-aa71-4ccd-aaea-117d2939edf5" Jan 24 00:45:35.875000 audit[5447]: USER_ACCT pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:35.878519 sshd[5447]: Accepted publickey for core from 10.0.0.1 port 56114 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:45:35.885867 sshd-session[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:35.912296 systemd-logind[1585]: New session 11 of user core. 
Jan 24 00:45:35.934669 kernel: audit: type=1101 audit(1769215535.875:757): pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:35.877000 audit[5447]: CRED_ACQ pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:36.003316 kernel: audit: type=1103 audit(1769215535.877:758): pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:36.004103 kernel: audit: type=1006 audit(1769215535.877:759): pid=5447 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 24 00:45:35.996923 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:45:35.877000 audit[5447]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe50389590 a2=3 a3=0 items=0 ppid=1 pid=5447 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:36.036290 kernel: audit: type=1300 audit(1769215535.877:759): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe50389590 a2=3 a3=0 items=0 ppid=1 pid=5447 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:35.877000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:36.025000 audit[5447]: USER_START pid=5447 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:36.078503 kernel: audit: type=1327 audit(1769215535.877:759): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:36.078614 kernel: audit: type=1105 audit(1769215536.025:760): pid=5447 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:36.034000 audit[5451]: CRED_ACQ pid=5451 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:36.110667 kernel: audit: type=1103 audit(1769215536.034:761): pid=5451 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:36.323509 sshd[5451]: Connection closed by 10.0.0.1 port 56114 Jan 24 
00:45:36.323283 sshd-session[5447]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:36.327000 audit[5447]: USER_END pid=5447 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:36.335526 systemd[1]: sshd@9-10.0.0.71:22-10.0.0.1:56114.service: Deactivated successfully. Jan 24 00:45:36.341027 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:45:36.345964 systemd-logind[1585]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:45:36.360724 systemd-logind[1585]: Removed session 11. Jan 24 00:45:36.377225 kernel: audit: type=1106 audit(1769215536.327:762): pid=5447 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:36.377418 kernel: audit: type=1104 audit(1769215536.327:763): pid=5447 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:36.327000 audit[5447]: CRED_DISP pid=5447 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:36.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.71:22-10.0.0.1:56114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:36.742506 kubelet[2869]: E0124 00:45:36.742264 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" podUID="050a17cf-0e04-46c0-ad64-4ce3987ef3d5" Jan 24 00:45:38.756092 kubelet[2869]: E0124 00:45:38.755304 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:38.757356 kubelet[2869]: E0124 00:45:38.756930 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:39.983802 update_engine[1586]: I20260124 00:45:39.981649 1586 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:45:39.983802 update_engine[1586]: I20260124 00:45:39.981814 1586 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:45:39.984936 update_engine[1586]: I20260124 00:45:39.984880 1586 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 24 00:45:40.005828 update_engine[1586]: E20260124 00:45:40.005614 1586 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:45:40.005828 update_engine[1586]: I20260124 00:45:40.005790 1586 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 24 00:45:40.754381 kubelet[2869]: E0124 00:45:40.753753 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f47959d-9fk7m" podUID="e041bbba-486b-4bf8-b212-ca4fbb2d4a57" Jan 24 00:45:41.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.71:22-10.0.0.1:56136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:41.358646 systemd[1]: Started sshd@10-10.0.0.71:22-10.0.0.1:56136.service - OpenSSH per-connection server daemon (10.0.0.1:56136). Jan 24 00:45:41.363903 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:45:41.363996 kernel: audit: type=1130 audit(1769215541.357:765): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.71:22-10.0.0.1:56136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:41.501000 audit[5500]: USER_ACCT pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:41.503354 sshd[5500]: Accepted publickey for core from 10.0.0.1 port 56136 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:45:41.507725 sshd-session[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:41.523812 systemd-logind[1585]: New session 12 of user core. 
Jan 24 00:45:41.504000 audit[5500]: CRED_ACQ pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:41.552827 kernel: audit: type=1101 audit(1769215541.501:766): pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:41.552888 kernel: audit: type=1103 audit(1769215541.504:767): pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:41.552937 kernel: audit: type=1006 audit(1769215541.504:768): pid=5500 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 24 00:45:41.562328 kernel: audit: type=1300 audit(1769215541.504:768): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffb02fa30 a2=3 a3=0 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:41.504000 audit[5500]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffb02fa30 a2=3 a3=0 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:41.504000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:41.594110 kernel: audit: type=1327 audit(1769215541.504:768): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:41.598294 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 24 00:45:41.610000 audit[5500]: USER_START pid=5500 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:41.612000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:41.683347 kernel: audit: type=1105 audit(1769215541.610:769): pid=5500 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:41.683572 kernel: audit: type=1103 audit(1769215541.612:770): pid=5504 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:41.901719 sshd[5504]: Connection closed by 10.0.0.1 port 56136 Jan 24 00:45:41.903804 sshd-session[5500]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:41.912000 audit[5500]: USER_END pid=5500 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:41.922538 systemd[1]: sshd@10-10.0.0.71:22-10.0.0.1:56136.service: Deactivated successfully. Jan 24 00:45:41.926388 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:45:41.937593 systemd-logind[1585]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:45:41.949921 systemd-logind[1585]: Removed session 12. Jan 24 00:45:41.914000 audit[5500]: CRED_DISP pid=5500 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:41.976977 kernel: audit: type=1106 audit(1769215541.912:771): pid=5500 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:41.977113 kernel: audit: type=1104 audit(1769215541.914:772): pid=5500 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:41.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.71:22-10.0.0.1:56136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:45:42.741355 kubelet[2869]: E0124 00:45:42.740827 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" podUID="5a82bd01-5299-411a-9329-279ee1a3e6ef" Jan 24 00:45:42.743061 kubelet[2869]: E0124 00:45:42.742871 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j2nlt" podUID="0329b08b-e4ed-4b35-88d7-60baae652219" Jan 24 00:45:45.767983 kubelet[2869]: E0124 00:45:45.765325 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:45:46.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.71:22-10.0.0.1:38236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:46.946480 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:45:46.946523 kernel: audit: type=1130 audit(1769215546.940:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.71:22-10.0.0.1:38236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:46.941009 systemd[1]: Started sshd@11-10.0.0.71:22-10.0.0.1:38236.service - OpenSSH per-connection server daemon (10.0.0.1:38236). 
Jan 24 00:45:47.113000 audit[5519]: USER_ACCT pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:47.116370 sshd[5519]: Accepted publickey for core from 10.0.0.1 port 38236 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:45:47.123863 sshd-session[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:47.120000 audit[5519]: CRED_ACQ pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:47.146109 systemd-logind[1585]: New session 13 of user core. Jan 24 00:45:47.166445 kernel: audit: type=1101 audit(1769215547.113:775): pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:47.166684 kernel: audit: type=1103 audit(1769215547.120:776): pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:47.166724 kernel: audit: type=1006 audit(1769215547.120:777): pid=5519 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 24 00:45:47.120000 audit[5519]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc09766980 a2=3 a3=0 items=0 ppid=1 pid=5519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:47.213774 kernel: audit: type=1300 audit(1769215547.120:777): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc09766980 a2=3 a3=0 items=0 ppid=1 pid=5519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:47.213972 kernel: audit: type=1327 audit(1769215547.120:777): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:47.120000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:47.227043 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 24 00:45:47.238000 audit[5519]: USER_START pid=5519 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:47.274453 kernel: audit: type=1105 audit(1769215547.238:778): pid=5519 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:47.274759 kernel: audit: type=1103 audit(1769215547.241:779): pid=5523 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:47.241000 audit[5523]: CRED_ACQ pid=5523 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:47.537737 sshd[5523]: Connection closed by 10.0.0.1 port 38236 Jan 24 00:45:47.539120 sshd-session[5519]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:47.547000 audit[5519]: USER_END pid=5519 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:47.554451 systemd[1]: sshd@11-10.0.0.71:22-10.0.0.1:38236.service: Deactivated successfully. Jan 24 00:45:47.559252 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:45:47.562311 systemd-logind[1585]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:45:47.572455 systemd-logind[1585]: Removed session 13. Jan 24 00:45:47.547000 audit[5519]: CRED_DISP pid=5519 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:47.602835 kernel: audit: type=1106 audit(1769215547.547:780): pid=5519 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:47.603046 kernel: audit: type=1104 audit(1769215547.547:781): pid=5519 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:47.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.71:22-10.0.0.1:38236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:45:48.747858 containerd[1607]: time="2026-01-24T00:45:48.747803605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:45:48.848875 containerd[1607]: time="2026-01-24T00:45:48.848716847Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:48.855675 containerd[1607]: time="2026-01-24T00:45:48.855427522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:45:48.855675 containerd[1607]: time="2026-01-24T00:45:48.855542167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:48.857758 kubelet[2869]: E0124 00:45:48.857527 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:48.857758 kubelet[2869]: E0124 00:45:48.857670 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:48.857758 kubelet[2869]: E0124 00:45:48.857744 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5594cdc7fb-c8l7f_calico-apiserver(050a17cf-0e04-46c0-ad64-4ce3987ef3d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:48.858737 kubelet[2869]: E0124 00:45:48.857771 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" podUID="050a17cf-0e04-46c0-ad64-4ce3987ef3d5" Jan 24 00:45:49.984330 update_engine[1586]: I20260124 00:45:49.984069 1586 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:45:49.984836 update_engine[1586]: I20260124 00:45:49.984382 1586 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:45:49.985378 update_engine[1586]: I20260124 00:45:49.985127 1586 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 24 00:45:50.004328 update_engine[1586]: E20260124 00:45:50.003875 1586 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:45:50.004517 update_engine[1586]: I20260124 00:45:50.004397 1586 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 24 00:45:50.004517 update_engine[1586]: I20260124 00:45:50.004423 1586 omaha_request_action.cc:617] Omaha request response: Jan 24 00:45:50.004587 update_engine[1586]: E20260124 00:45:50.004534 1586 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 24 00:45:50.004587 update_engine[1586]: I20260124 00:45:50.004575 1586 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 24 00:45:50.004587 update_engine[1586]: I20260124 00:45:50.004586 1586 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:45:50.004587 update_engine[1586]: I20260124 00:45:50.004685 1586 update_attempter.cc:306] Processing Done. Jan 24 00:45:50.004587 update_engine[1586]: E20260124 00:45:50.004707 1586 update_attempter.cc:619] Update failed. Jan 24 00:45:50.004587 update_engine[1586]: I20260124 00:45:50.004717 1586 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 24 00:45:50.004587 update_engine[1586]: I20260124 00:45:50.004725 1586 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 24 00:45:50.004587 update_engine[1586]: I20260124 00:45:50.004733 1586 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 24 00:45:50.005086 update_engine[1586]: I20260124 00:45:50.004882 1586 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 24 00:45:50.005086 update_engine[1586]: I20260124 00:45:50.004913 1586 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 24 00:45:50.005086 update_engine[1586]: I20260124 00:45:50.004922 1586 omaha_request_action.cc:272] Request: Jan 24 00:45:50.005086 update_engine[1586]: Jan 24 00:45:50.005086 update_engine[1586]: Jan 24 00:45:50.005086 update_engine[1586]: Jan 24 00:45:50.005086 update_engine[1586]: Jan 24 00:45:50.005086 update_engine[1586]: Jan 24 00:45:50.005086 update_engine[1586]: Jan 24 00:45:50.005086 update_engine[1586]: I20260124 00:45:50.004931 1586 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:45:50.005086 update_engine[1586]: I20260124 00:45:50.004962 1586 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:45:50.005728 update_engine[1586]: I20260124 00:45:50.005544 1586 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 24 00:45:50.007344 locksmithd[1650]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 24 00:45:50.027457 update_engine[1586]: E20260124 00:45:50.026808 1586 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:45:50.027457 update_engine[1586]: I20260124 00:45:50.027032 1586 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 24 00:45:50.027457 update_engine[1586]: I20260124 00:45:50.027056 1586 omaha_request_action.cc:617] Omaha request response: Jan 24 00:45:50.027457 update_engine[1586]: I20260124 00:45:50.027068 1586 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:45:50.027457 update_engine[1586]: I20260124 00:45:50.027078 1586 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:45:50.027457 update_engine[1586]: I20260124 00:45:50.027086 1586 update_attempter.cc:306] Processing Done. Jan 24 00:45:50.027457 update_engine[1586]: I20260124 00:45:50.027096 1586 update_attempter.cc:310] Error event sent. Jan 24 00:45:50.027457 update_engine[1586]: I20260124 00:45:50.027109 1586 update_check_scheduler.cc:74] Next update check in 47m15s Jan 24 00:45:50.032489 locksmithd[1650]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 24 00:45:50.743076 kubelet[2869]: E0124 00:45:50.742104 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" podUID="f244c052-aa71-4ccd-aaea-117d2939edf5" Jan 24 00:45:52.565910 systemd[1]: Started sshd@12-10.0.0.71:22-10.0.0.1:45298.service - OpenSSH per-connection server daemon (10.0.0.1:45298). Jan 24 00:45:52.577325 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:45:52.577423 kernel: audit: type=1130 audit(1769215552.564:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.71:22-10.0.0.1:45298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:52.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.71:22-10.0.0.1:45298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:45:52.711000 audit[5539]: USER_ACCT pid=5539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:52.714804 sshd[5539]: Accepted publickey for core from 10.0.0.1 port 45298 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:45:52.717838 sshd-session[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:52.738428 kernel: audit: type=1101 audit(1769215552.711:784): pid=5539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:52.739393 kernel: audit: type=1103 audit(1769215552.714:785): pid=5539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:52.714000 audit[5539]: CRED_ACQ pid=5539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:52.737620 systemd-logind[1585]: New session 14 of user core. Jan 24 00:45:52.774412 kernel: audit: type=1006 audit(1769215552.714:786): pid=5539 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 24 00:45:52.774631 kernel: audit: type=1300 audit(1769215552.714:786): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5232ded0 a2=3 a3=0 items=0 ppid=1 pid=5539 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:52.714000 audit[5539]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5232ded0 a2=3 a3=0 items=0 ppid=1 pid=5539 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:52.714000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:52.809602 kernel: audit: type=1327 audit(1769215552.714:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:52.816042 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 24 00:45:52.823000 audit[5539]: USER_START pid=5539 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:52.826000 audit[5543]: CRED_ACQ pid=5543 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:52.881277 kernel: audit: type=1105 audit(1769215552.823:787): pid=5539 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:52.881343 kernel: audit: type=1103 audit(1769215552.826:788): pid=5543 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:53.058288 sshd[5543]: Connection closed by 10.0.0.1 port 45298 Jan 24 00:45:53.058864 sshd-session[5539]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:53.062000 audit[5539]: USER_END pid=5539 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:53.067509 systemd[1]: sshd@12-10.0.0.71:22-10.0.0.1:45298.service: Deactivated successfully. Jan 24 00:45:53.073893 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:45:53.086413 systemd-logind[1585]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:45:53.088586 systemd-logind[1585]: Removed session 14. Jan 24 00:45:53.062000 audit[5539]: CRED_DISP pid=5539 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:53.130747 kernel: audit: type=1106 audit(1769215553.062:789): pid=5539 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:53.130997 kernel: audit: type=1104 audit(1769215553.062:790): pid=5539 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:53.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.71:22-10.0.0.1:45298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:45:53.750079 containerd[1607]: time="2026-01-24T00:45:53.748352086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:45:53.839510 containerd[1607]: time="2026-01-24T00:45:53.838924771Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:53.843275 containerd[1607]: time="2026-01-24T00:45:53.843098388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:45:53.843444 containerd[1607]: time="2026-01-24T00:45:53.843390478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:53.843640 kubelet[2869]: E0124 00:45:53.843545 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:45:53.843640 kubelet[2869]: E0124 00:45:53.843597 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:45:53.844326 kubelet[2869]: E0124 00:45:53.843810 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5f8f47959d-9fk7m_calico-system(e041bbba-486b-4bf8-b212-ca4fbb2d4a57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:53.846631 containerd[1607]: time="2026-01-24T00:45:53.846577886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:45:53.927469 containerd[1607]: time="2026-01-24T00:45:53.927114443Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:53.935407 containerd[1607]: time="2026-01-24T00:45:53.935281756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:45:53.935565 containerd[1607]: time="2026-01-24T00:45:53.935437738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:53.938580 kubelet[2869]: E0124 00:45:53.937376 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:45:53.938580 kubelet[2869]: E0124 00:45:53.937491 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:45:53.938580 kubelet[2869]: E0124 00:45:53.937583 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5f8f47959d-9fk7m_calico-system(e041bbba-486b-4bf8-b212-ca4fbb2d4a57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:53.938580 kubelet[2869]: E0124 00:45:53.937628 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f47959d-9fk7m" podUID="e041bbba-486b-4bf8-b212-ca4fbb2d4a57" Jan 24 00:45:56.750789 containerd[1607]: time="2026-01-24T00:45:56.750656662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:45:56.848390 containerd[1607]: time="2026-01-24T00:45:56.847934272Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:56.851094 containerd[1607]: time="2026-01-24T00:45:56.850970736Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:45:56.851094 containerd[1607]: time="2026-01-24T00:45:56.851066636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:56.852386 kubelet[2869]: E0124 00:45:56.851521 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:56.852386 kubelet[2869]: E0124 00:45:56.851566 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:56.852386 kubelet[2869]: E0124 00:45:56.851828 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5594cdc7fb-7kk8c_calico-apiserver(5a82bd01-5299-411a-9329-279ee1a3e6ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:56.852386 kubelet[2869]: E0124 00:45:56.851873 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" podUID="5a82bd01-5299-411a-9329-279ee1a3e6ef" Jan 24 00:45:56.854493 containerd[1607]: time="2026-01-24T00:45:56.854382248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:45:56.927449 containerd[1607]: time="2026-01-24T00:45:56.927329778Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:45:56.931318 containerd[1607]: time="2026-01-24T00:45:56.931093518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:45:56.931612 containerd[1607]: time="2026-01-24T00:45:56.931362133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:45:56.932124 kubelet[2869]: E0124 00:45:56.931511 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:45:56.932124 kubelet[2869]: E0124 00:45:56.931557 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:45:56.932124 kubelet[2869]: E0124 00:45:56.931637 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-j2nlt_calico-system(0329b08b-e4ed-4b35-88d7-60baae652219): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:56.932124 kubelet[2869]: E0124 00:45:56.931677 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j2nlt" podUID="0329b08b-e4ed-4b35-88d7-60baae652219" Jan 24 00:45:58.076561 systemd[1]: Started sshd@13-10.0.0.71:22-10.0.0.1:45382.service - OpenSSH per-connection server daemon (10.0.0.1:45382). Jan 24 00:45:58.082587 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:45:58.083495 kernel: audit: type=1130 audit(1769215558.076:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.71:22-10.0.0.1:45382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:45:58.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.71:22-10.0.0.1:45382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:58.195000 audit[5565]: USER_ACCT pid=5565 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.196548 sshd[5565]: Accepted publickey for core from 10.0.0.1 port 45382 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:45:58.200311 sshd-session[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:58.214104 systemd-logind[1585]: New session 15 of user core. Jan 24 00:45:58.197000 audit[5565]: CRED_ACQ pid=5565 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.242119 kernel: audit: type=1101 audit(1769215558.195:793): pid=5565 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.242346 kernel: audit: type=1103 audit(1769215558.197:794): pid=5565 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.197000 audit[5565]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc23c86a00 a2=3 a3=0 items=0 ppid=1 pid=5565 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:58.278107 kernel: audit: type=1006 audit(1769215558.197:795): pid=5565 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 24 00:45:58.278312 kernel: audit: type=1300 audit(1769215558.197:795): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc23c86a00 a2=3 a3=0 items=0 ppid=1 pid=5565 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:58.278347 kernel: audit: type=1327 audit(1769215558.197:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:58.197000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:58.288631 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 24 00:45:58.296000 audit[5565]: USER_START pid=5565 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.300000 audit[5569]: CRED_ACQ pid=5569 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.355617 kernel: audit: type=1105 audit(1769215558.296:796): pid=5565 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.355857 kernel: audit: type=1103 audit(1769215558.300:797): pid=5569 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.485543 sshd[5569]: Connection closed by 10.0.0.1 port 45382 Jan 24 00:45:58.486351 sshd-session[5565]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:58.489000 audit[5565]: USER_END pid=5565 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.490000 audit[5565]: CRED_DISP pid=5565 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.548884 kernel: audit: type=1106 audit(1769215558.489:798): pid=5565 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.549407 kernel: audit: type=1104 audit(1769215558.490:799): pid=5565 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.556959 systemd[1]: sshd@13-10.0.0.71:22-10.0.0.1:45382.service: Deactivated successfully. Jan 24 00:45:58.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.71:22-10.0.0.1:45382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:58.562103 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:45:58.566435 systemd-logind[1585]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:45:58.573131 systemd[1]: Started sshd@14-10.0.0.71:22-10.0.0.1:45398.service - OpenSSH per-connection server daemon (10.0.0.1:45398). 
Jan 24 00:45:58.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.71:22-10.0.0.1:45398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:58.575131 systemd-logind[1585]: Removed session 15. Jan 24 00:45:58.696000 audit[5583]: USER_ACCT pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.697509 sshd[5583]: Accepted publickey for core from 10.0.0.1 port 45398 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:45:58.702000 audit[5583]: CRED_ACQ pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.702000 audit[5583]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5947f2e0 a2=3 a3=0 items=0 ppid=1 pid=5583 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:58.702000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:58.704990 sshd-session[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:58.722466 systemd-logind[1585]: New session 16 of user core. Jan 24 00:45:58.733098 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:45:58.739000 audit[5583]: USER_START pid=5583 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.745000 audit[5587]: CRED_ACQ pid=5587 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.997516 sshd[5587]: Connection closed by 10.0.0.1 port 45398 Jan 24 00:45:58.993424 sshd-session[5583]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:58.999000 audit[5583]: USER_END pid=5583 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:58.999000 audit[5583]: CRED_DISP pid=5583 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:59.009701 systemd[1]: sshd@14-10.0.0.71:22-10.0.0.1:45398.service: Deactivated successfully. Jan 24 00:45:59.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.71:22-10.0.0.1:45398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 24 00:45:59.013652 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:45:59.020054 systemd-logind[1585]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:45:59.026870 systemd[1]: Started sshd@15-10.0.0.71:22-10.0.0.1:45402.service - OpenSSH per-connection server daemon (10.0.0.1:45402). Jan 24 00:45:59.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.71:22-10.0.0.1:45402 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:59.032036 systemd-logind[1585]: Removed session 16. Jan 24 00:45:59.147966 sshd[5599]: Accepted publickey for core from 10.0.0.1 port 45402 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:45:59.147000 audit[5599]: USER_ACCT pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:59.151000 audit[5599]: CRED_ACQ pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:59.152000 audit[5599]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd976d0830 a2=3 a3=0 items=0 ppid=1 pid=5599 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:45:59.152000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:45:59.154499 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:59.179985 systemd-logind[1585]: New session 17 of user core. Jan 24 00:45:59.194970 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 24 00:45:59.210000 audit[5599]: USER_START pid=5599 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:59.218000 audit[5603]: CRED_ACQ pid=5603 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:59.473425 sshd[5603]: Connection closed by 10.0.0.1 port 45402 Jan 24 00:45:59.474039 sshd-session[5599]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:59.481000 audit[5599]: USER_END pid=5599 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:59.481000 audit[5599]: CRED_DISP pid=5599 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:45:59.489507 systemd[1]: sshd@15-10.0.0.71:22-10.0.0.1:45402.service: Deactivated successfully. Jan 24 00:45:59.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.71:22-10.0.0.1:45402 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:45:59.494433 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:45:59.498402 systemd-logind[1585]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:45:59.504486 systemd-logind[1585]: Removed session 17. 
Jan 24 00:45:59.744522 kubelet[2869]: E0124 00:45:59.743868 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" podUID="050a17cf-0e04-46c0-ad64-4ce3987ef3d5" Jan 24 00:46:00.744860 containerd[1607]: time="2026-01-24T00:46:00.744001315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:46:00.823067 containerd[1607]: time="2026-01-24T00:46:00.822724905Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:46:00.828429 containerd[1607]: time="2026-01-24T00:46:00.827906835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:46:00.828429 containerd[1607]: time="2026-01-24T00:46:00.827998185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:46:00.829377 kubelet[2869]: E0124 00:46:00.828485 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:46:00.829377 kubelet[2869]: E0124 00:46:00.828526 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:46:00.829377 kubelet[2869]: E0124 00:46:00.828600 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-48xkv_calico-system(985a1218-3c37-4f6d-aa83-5ce6fdad91a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:00.831052 containerd[1607]: time="2026-01-24T00:46:00.830948427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:46:00.911113 containerd[1607]: time="2026-01-24T00:46:00.905698596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:46:00.911113 containerd[1607]: time="2026-01-24T00:46:00.911025595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:46:00.911113 containerd[1607]: time="2026-01-24T00:46:00.911090761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:46:00.912076 kubelet[2869]: E0124 00:46:00.911952 2869 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:46:00.912076 kubelet[2869]: E0124 00:46:00.912009 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:46:00.912366 kubelet[2869]: E0124 00:46:00.912103 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-48xkv_calico-system(985a1218-3c37-4f6d-aa83-5ce6fdad91a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:00.912366 kubelet[2869]: E0124 00:46:00.912299 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:46:03.744059 containerd[1607]: time="2026-01-24T00:46:03.743559171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:46:03.821932 containerd[1607]: time="2026-01-24T00:46:03.821512118Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:46:03.824866 containerd[1607]: time="2026-01-24T00:46:03.824513272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:46:03.824866 containerd[1607]: time="2026-01-24T00:46:03.824670878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:46:03.825460 kubelet[2869]: E0124 00:46:03.825059 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:46:03.825460 kubelet[2869]: E0124 00:46:03.825370 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:46:03.826092 kubelet[2869]: E0124 00:46:03.825535 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5f7d444f9d-54g8g_calico-system(f244c052-aa71-4ccd-aaea-117d2939edf5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:03.826092 kubelet[2869]: E0124 00:46:03.825580 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" podUID="f244c052-aa71-4ccd-aaea-117d2939edf5" Jan 24 00:46:04.512974 systemd[1]: Started sshd@16-10.0.0.71:22-10.0.0.1:46446.service - OpenSSH per-connection server daemon (10.0.0.1:46446). Jan 24 00:46:04.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.71:22-10.0.0.1:46446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:04.521416 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 24 00:46:04.521526 kernel: audit: type=1130 audit(1769215564.513:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.71:22-10.0.0.1:46446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:04.706746 sshd[5619]: Accepted publickey for core from 10.0.0.1 port 46446 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:46:04.706000 audit[5619]: USER_ACCT pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:04.711685 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:46:04.736347 kernel: audit: type=1101 audit(1769215564.706:820): pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:04.736619 kernel: audit: type=1103 audit(1769215564.709:821): pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:04.709000 audit[5619]: CRED_ACQ pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:04.736929 systemd-logind[1585]: New session 18 of user core. 
Jan 24 00:46:04.744532 kubelet[2869]: E0124 00:46:04.744384 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f47959d-9fk7m" podUID="e041bbba-486b-4bf8-b212-ca4fbb2d4a57" Jan 24 00:46:04.793043 kernel: audit: type=1006 audit(1769215564.709:822): pid=5619 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 24 00:46:04.793333 kernel: audit: type=1300 audit(1769215564.709:822): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe45c58e20 a2=3 a3=0 items=0 ppid=1 pid=5619 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:04.709000 audit[5619]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe45c58e20 a2=3 a3=0 items=0 ppid=1 pid=5619 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:04.709000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:04.836752 kernel: audit: type=1327 audit(1769215564.709:822): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:04.835565 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 24 00:46:04.847000 audit[5619]: USER_START pid=5619 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:04.883430 kernel: audit: type=1105 audit(1769215564.847:823): pid=5619 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:04.883522 kernel: audit: type=1103 audit(1769215564.852:824): pid=5623 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:04.852000 audit[5623]: CRED_ACQ pid=5623 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:05.169752 sshd[5623]: Connection closed by 10.0.0.1 port 46446 Jan 24 00:46:05.172476 sshd-session[5619]: pam_unix(sshd:session): session closed for user core Jan 24 00:46:05.175000 audit[5619]: USER_END pid=5619 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:05.183462 systemd[1]: sshd@16-10.0.0.71:22-10.0.0.1:46446.service: Deactivated successfully. Jan 24 00:46:05.184466 systemd-logind[1585]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:46:05.192534 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:46:05.197370 systemd-logind[1585]: Removed session 18. Jan 24 00:46:05.175000 audit[5619]: CRED_DISP pid=5619 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:05.243286 kernel: audit: type=1106 audit(1769215565.175:825): pid=5619 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:05.243433 kernel: audit: type=1104 audit(1769215565.175:826): pid=5619 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:05.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.71:22-10.0.0.1:46446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:46:07.757799 kubelet[2869]: E0124 00:46:07.754053 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" podUID="5a82bd01-5299-411a-9329-279ee1a3e6ef" Jan 24 00:46:08.741325 kubelet[2869]: E0124 00:46:08.741271 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j2nlt" podUID="0329b08b-e4ed-4b35-88d7-60baae652219" Jan 24 00:46:09.740809 kubelet[2869]: E0124 00:46:09.739576 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:10.216416 systemd[1]: Started sshd@17-10.0.0.71:22-10.0.0.1:46510.service - OpenSSH per-connection server daemon (10.0.0.1:46510). Jan 24 00:46:10.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.71:22-10.0.0.1:46510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:10.224286 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:46:10.224743 kernel: audit: type=1130 audit(1769215570.216:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.71:22-10.0.0.1:46510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:10.364000 audit[5668]: USER_ACCT pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:10.373782 sshd-session[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:46:10.378509 sshd[5668]: Accepted publickey for core from 10.0.0.1 port 46510 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:46:10.369000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:10.396744 systemd-logind[1585]: New session 19 of user core. 
Jan 24 00:46:10.420339 kernel: audit: type=1101 audit(1769215570.364:829): pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:10.420465 kernel: audit: type=1103 audit(1769215570.369:830): pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:10.436353 kernel: audit: type=1006 audit(1769215570.369:831): pid=5668 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 24 00:46:10.436457 kernel: audit: type=1300 audit(1769215570.369:831): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff09ad1930 a2=3 a3=0 items=0 ppid=1 pid=5668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:10.369000 audit[5668]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff09ad1930 a2=3 a3=0 items=0 ppid=1 pid=5668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:10.369000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:10.477721 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:46:10.488403 kernel: audit: type=1327 audit(1769215570.369:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:10.488000 audit[5668]: USER_START pid=5668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:10.538357 kernel: audit: type=1105 audit(1769215570.488:832): pid=5668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:10.541000 audit[5672]: CRED_ACQ pid=5672 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:10.569433 kernel: audit: type=1103 audit(1769215570.541:833): pid=5672 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:10.842091 sshd[5672]: Connection closed by 10.0.0.1 port 46510 Jan 24 00:46:10.843450 sshd-session[5668]: pam_unix(sshd:session): session closed for user core Jan 24 00:46:10.852000 audit[5668]: USER_END pid=5668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:10.858429 systemd[1]: sshd@17-10.0.0.71:22-10.0.0.1:46510.service: Deactivated successfully. Jan 24 00:46:10.864459 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:46:10.871433 systemd-logind[1585]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:46:10.878456 systemd-logind[1585]: Removed session 19. Jan 24 00:46:10.852000 audit[5668]: CRED_DISP pid=5668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:10.920360 kernel: audit: type=1106 audit(1769215570.852:834): pid=5668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:10.920466 kernel: audit: type=1104 audit(1769215570.852:835): pid=5668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:10.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.71:22-10.0.0.1:46510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:46:11.755272 kubelet[2869]: E0124 00:46:11.754645 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:46:13.745306 kubelet[2869]: E0124 00:46:13.744893 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" podUID="050a17cf-0e04-46c0-ad64-4ce3987ef3d5" Jan 24 00:46:14.741547 kubelet[2869]: E0124 00:46:14.740899 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" podUID="f244c052-aa71-4ccd-aaea-117d2939edf5" Jan 24 00:46:15.750339 kubelet[2869]: E0124 00:46:15.746852 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f47959d-9fk7m" podUID="e041bbba-486b-4bf8-b212-ca4fbb2d4a57" Jan 24 00:46:15.895837 systemd[1]: Started sshd@18-10.0.0.71:22-10.0.0.1:38110.service - OpenSSH per-connection server daemon (10.0.0.1:38110). Jan 24 00:46:15.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.71:22-10.0.0.1:38110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:46:15.904301 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:46:15.904366 kernel: audit: type=1130 audit(1769215575.895:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.71:22-10.0.0.1:38110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:16.085359 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 38110 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:46:16.084000 audit[5689]: USER_ACCT pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:16.091667 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:46:16.112449 systemd-logind[1585]: New session 20 of user core. Jan 24 00:46:16.122687 kernel: audit: type=1101 audit(1769215576.084:838): pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:16.122751 kernel: audit: type=1103 audit(1769215576.089:839): pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:16.089000 audit[5689]: CRED_ACQ pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:16.174423 kernel: audit: type=1006 audit(1769215576.089:840): pid=5689 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 24 00:46:16.174514 kernel: audit: type=1300 audit(1769215576.089:840): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4dcfa940 a2=3 a3=0 items=0 ppid=1 pid=5689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:16.089000 audit[5689]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4dcfa940 a2=3 a3=0 items=0 ppid=1 pid=5689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:16.089000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:16.206381 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 24 00:46:16.218000 audit[5689]: USER_START pid=5689 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:16.283782 kernel: audit: type=1327 audit(1769215576.089:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:16.283887 kernel: audit: type=1105 audit(1769215576.218:841): pid=5689 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:16.284367 kernel: audit: type=1103 audit(1769215576.223:842): pid=5693 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:16.223000 audit[5693]: CRED_ACQ pid=5693 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:16.520878 sshd[5693]: Connection closed by 10.0.0.1 port 38110 Jan 24 00:46:16.520965 sshd-session[5689]: pam_unix(sshd:session): session closed for user core Jan 24 00:46:16.529000 audit[5689]: USER_END pid=5689 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:16.535729 systemd[1]: sshd@18-10.0.0.71:22-10.0.0.1:38110.service: Deactivated successfully. Jan 24 00:46:16.546607 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:46:16.552532 systemd-logind[1585]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:46:16.555444 systemd-logind[1585]: Removed session 20. Jan 24 00:46:16.529000 audit[5689]: CRED_DISP pid=5689 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:16.602479 kernel: audit: type=1106 audit(1769215576.529:843): pid=5689 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:16.602606 kernel: audit: type=1104 audit(1769215576.529:844): pid=5689 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:16.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.71:22-10.0.0.1:38110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 24 00:46:21.564792 systemd[1]: Started sshd@19-10.0.0.71:22-10.0.0.1:38120.service - OpenSSH per-connection server daemon (10.0.0.1:38120). Jan 24 00:46:21.586927 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:46:21.587573 kernel: audit: type=1130 audit(1769215581.563:846): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.71:22-10.0.0.1:38120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:21.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.71:22-10.0.0.1:38120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:21.747700 kubelet[2869]: E0124 00:46:21.747018 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:21.880433 sshd[5706]: Accepted publickey for core from 10.0.0.1 port 38120 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:46:21.877000 audit[5706]: USER_ACCT pid=5706 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:21.888573 sshd-session[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:46:21.921605 systemd-logind[1585]: New session 21 of user core. Jan 24 00:46:21.926403 kernel: audit: type=1101 audit(1769215581.877:847): pid=5706 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:21.880000 audit[5706]: CRED_ACQ pid=5706 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:21.972727 kernel: audit: type=1103 audit(1769215581.880:848): pid=5706 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:21.977885 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 24 00:46:21.880000 audit[5706]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0fb1f1e0 a2=3 a3=0 items=0 ppid=1 pid=5706 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:22.049593 kernel: audit: type=1006 audit(1769215581.880:849): pid=5706 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 24 00:46:22.049713 kernel: audit: type=1300 audit(1769215581.880:849): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0fb1f1e0 a2=3 a3=0 items=0 ppid=1 pid=5706 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:21.880000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:22.065402 kernel: audit: type=1327 audit(1769215581.880:849): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:21.997000 audit[5706]: USER_START pid=5706 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:22.118605 kernel: audit: type=1105 audit(1769215581.997:850): pid=5706 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:22.003000 audit[5710]: CRED_ACQ pid=5710 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:22.154412 kernel: audit: type=1103 audit(1769215582.003:851): pid=5710 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:22.441442 sshd[5710]: Connection closed by 10.0.0.1 port 38120 Jan 24 00:46:22.445331 sshd-session[5706]: pam_unix(sshd:session): session closed for user core Jan 24 00:46:22.474000 audit[5706]: USER_END pid=5706 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:22.486594 systemd[1]: sshd@19-10.0.0.71:22-10.0.0.1:38120.service: Deactivated successfully. Jan 24 00:46:22.496721 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:46:22.509277 systemd-logind[1585]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:46:22.514965 systemd-logind[1585]: Removed session 21. 
Jan 24 00:46:22.529695 kernel: audit: type=1106 audit(1769215582.474:852): pid=5706 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:22.474000 audit[5706]: CRED_DISP pid=5706 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:22.565376 kernel: audit: type=1104 audit(1769215582.474:853): pid=5706 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:22.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.71:22-10.0.0.1:38120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:22.749384 kubelet[2869]: E0124 00:46:22.747765 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" podUID="5a82bd01-5299-411a-9329-279ee1a3e6ef" Jan 24 00:46:22.753633 kubelet[2869]: E0124 00:46:22.753555 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:46:23.751812 kubelet[2869]: E0124 00:46:23.746647 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j2nlt" podUID="0329b08b-e4ed-4b35-88d7-60baae652219" Jan 24 00:46:24.743324 kubelet[2869]: E0124 00:46:24.742906 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" podUID="050a17cf-0e04-46c0-ad64-4ce3987ef3d5" Jan 24 00:46:25.743085 kubelet[2869]: E0124 00:46:25.742491 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:27.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.71:22-10.0.0.1:42900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:27.484704 systemd[1]: Started sshd@20-10.0.0.71:22-10.0.0.1:42900.service - OpenSSH per-connection server daemon (10.0.0.1:42900). Jan 24 00:46:27.504601 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:46:27.504684 kernel: audit: type=1130 audit(1769215587.482:855): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.71:22-10.0.0.1:42900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:27.743575 kubelet[2869]: E0124 00:46:27.741526 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:27.753594 kubelet[2869]: E0124 00:46:27.752840 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f47959d-9fk7m" podUID="e041bbba-486b-4bf8-b212-ca4fbb2d4a57" Jan 24 00:46:27.781000 audit[5725]: USER_ACCT pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:27.788960 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:46:27.792950 sshd[5725]: Accepted publickey for core from 10.0.0.1 port 42900 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:46:27.820479 kernel: audit: type=1101 audit(1769215587.781:856): pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:27.820575 kernel: audit: type=1103 audit(1769215587.785:857): pid=5725 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:27.785000 audit[5725]: CRED_ACQ pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:27.844746 systemd-logind[1585]: New session 22 of user core. Jan 24 00:46:27.870820 kernel: audit: type=1006 audit(1769215587.785:858): pid=5725 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 24 00:46:27.785000 audit[5725]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc28c0b730 a2=3 a3=0 items=0 ppid=1 pid=5725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:27.938973 kernel: audit: type=1300 audit(1769215587.785:858): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc28c0b730 a2=3 a3=0 items=0 ppid=1 pid=5725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:27.939110 kernel: audit: type=1327 audit(1769215587.785:858): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:27.785000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:27.956711 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 24 00:46:27.971000 audit[5725]: USER_START pid=5725 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:27.978000 audit[5729]: CRED_ACQ pid=5729 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:28.047838 kernel: audit: type=1105 audit(1769215587.971:859): pid=5725 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:28.047958 kernel: audit: type=1103 audit(1769215587.978:860): pid=5729 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:28.293801 sshd[5729]: Connection closed by 10.0.0.1 port 42900 Jan 24 00:46:28.294376 sshd-session[5725]: pam_unix(sshd:session): session closed for user core Jan 24 00:46:28.297000 audit[5725]: USER_END pid=5725 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:28.305608 systemd[1]: sshd@20-10.0.0.71:22-10.0.0.1:42900.service: Deactivated successfully. Jan 24 00:46:28.317361 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:46:28.323510 systemd-logind[1585]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:46:28.297000 audit[5725]: CRED_DISP pid=5725 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:28.344683 systemd-logind[1585]: Removed session 22. Jan 24 00:46:28.366997 kernel: audit: type=1106 audit(1769215588.297:861): pid=5725 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:28.367086 kernel: audit: type=1104 audit(1769215588.297:862): pid=5725 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:28.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.71:22-10.0.0.1:42900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:46:29.754414 kubelet[2869]: E0124 00:46:29.751882 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" podUID="f244c052-aa71-4ccd-aaea-117d2939edf5" Jan 24 00:46:33.326606 systemd[1]: Started sshd@21-10.0.0.71:22-10.0.0.1:45786.service - OpenSSH per-connection server daemon (10.0.0.1:45786). Jan 24 00:46:33.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.71:22-10.0.0.1:45786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:33.339046 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:46:33.339126 kernel: audit: type=1130 audit(1769215593.326:864): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.71:22-10.0.0.1:45786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:33.501000 audit[5743]: USER_ACCT pid=5743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:33.504529 sshd[5743]: Accepted publickey for core from 10.0.0.1 port 45786 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:46:33.547474 kernel: audit: type=1101 audit(1769215593.501:865): pid=5743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:33.547000 audit[5743]: CRED_ACQ pid=5743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:33.550801 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:46:33.573488 systemd-logind[1585]: New session 23 of user core. 
Jan 24 00:46:33.587456 kernel: audit: type=1103 audit(1769215593.547:866): pid=5743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:33.587558 kernel: audit: type=1006 audit(1769215593.547:867): pid=5743 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 24 00:46:33.547000 audit[5743]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc54eca840 a2=3 a3=0 items=0 ppid=1 pid=5743 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:33.651555 kernel: audit: type=1300 audit(1769215593.547:867): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc54eca840 a2=3 a3=0 items=0 ppid=1 pid=5743 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:33.547000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:33.653700 kernel: audit: type=1327 audit(1769215593.547:867): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:33.653735 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 00:46:33.668000 audit[5743]: USER_START pid=5743 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:33.723001 kernel: audit: type=1105 audit(1769215593.668:868): pid=5743 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:33.723111 kernel: audit: type=1103 audit(1769215593.673:869): pid=5747 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:33.673000 audit[5747]: CRED_ACQ pid=5747 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:33.751909 kubelet[2869]: E0124 00:46:33.751836 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" podUID="5a82bd01-5299-411a-9329-279ee1a3e6ef" Jan 24 00:46:34.018484 sshd[5747]: Connection closed by 10.0.0.1 port 45786 Jan 24 00:46:34.018710 sshd-session[5743]: 
pam_unix(sshd:session): session closed for user core Jan 24 00:46:34.028000 audit[5743]: USER_END pid=5743 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:34.074424 kernel: audit: type=1106 audit(1769215594.028:870): pid=5743 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:34.029000 audit[5743]: CRED_DISP pid=5743 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:34.080103 systemd[1]: sshd@21-10.0.0.71:22-10.0.0.1:45786.service: Deactivated successfully. Jan 24 00:46:34.087632 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 00:46:34.090995 systemd-logind[1585]: Session 23 logged out. Waiting for processes to exit. Jan 24 00:46:34.097984 systemd[1]: Started sshd@22-10.0.0.71:22-10.0.0.1:45798.service - OpenSSH per-connection server daemon (10.0.0.1:45798). Jan 24 00:46:34.105547 kernel: audit: type=1104 audit(1769215594.029:871): pid=5743 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:34.104491 systemd-logind[1585]: Removed session 23. Jan 24 00:46:34.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.71:22-10.0.0.1:45786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:34.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.71:22-10.0.0.1:45798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:46:34.239000 audit[5762]: USER_ACCT pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:34.240722 sshd[5762]: Accepted publickey for core from 10.0.0.1 port 45798 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:46:34.241000 audit[5762]: CRED_ACQ pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:34.241000 audit[5762]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce3be9b20 a2=3 a3=0 items=0 ppid=1 pid=5762 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:34.241000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:34.244722 sshd-session[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:46:34.259608 systemd-logind[1585]: New session 24 of user core. Jan 24 00:46:34.267483 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 24 00:46:34.271000 audit[5762]: USER_START pid=5762 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:34.278000 audit[5766]: CRED_ACQ pid=5766 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:34.741392 kubelet[2869]: E0124 00:46:34.740993 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:34.746681 kubelet[2869]: E0124 00:46:34.746614 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j2nlt" podUID="0329b08b-e4ed-4b35-88d7-60baae652219" Jan 24 00:46:35.305099 sshd[5766]: Connection closed by 10.0.0.1 port 45798 Jan 24 00:46:35.309865 sshd-session[5762]: pam_unix(sshd:session): session closed for user core Jan 24 00:46:35.313000 audit[5762]: USER_END pid=5762 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:35.313000 audit[5762]: CRED_DISP pid=5762 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:35.329120 systemd[1]: Started sshd@23-10.0.0.71:22-10.0.0.1:45812.service - OpenSSH per-connection server daemon (10.0.0.1:45812). Jan 24 00:46:35.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.71:22-10.0.0.1:45812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:35.333865 systemd[1]: sshd@22-10.0.0.71:22-10.0.0.1:45798.service: Deactivated successfully. Jan 24 00:46:35.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.71:22-10.0.0.1:45798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:35.340698 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 00:46:35.354678 systemd-logind[1585]: Session 24 logged out. Waiting for processes to exit. Jan 24 00:46:35.358374 systemd-logind[1585]: Removed session 24. Jan 24 00:46:35.489000 audit[5781]: USER_ACCT pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:35.490573 sshd[5781]: Accepted publickey for core from 10.0.0.1 port 45812 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:46:35.490000 audit[5781]: CRED_ACQ pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:35.490000 audit[5781]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd306bc10 a2=3 a3=0 items=0 ppid=1 pid=5781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:35.490000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:35.496500 sshd-session[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:46:35.516942 systemd-logind[1585]: New session 25 of user core. Jan 24 00:46:35.533723 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 24 00:46:35.549000 audit[5781]: USER_START pid=5781 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:35.559000 audit[5788]: CRED_ACQ pid=5788 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:36.747944 containerd[1607]: time="2026-01-24T00:46:36.747577833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:46:36.750062 kubelet[2869]: E0124 00:46:36.749119 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9" Jan 24 00:46:36.847833 containerd[1607]: time="2026-01-24T00:46:36.847121987Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:46:36.854379 containerd[1607]: time="2026-01-24T00:46:36.854225009Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:46:36.855569 containerd[1607]: time="2026-01-24T00:46:36.854736618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:46:36.855638 kubelet[2869]: E0124 00:46:36.854941 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:46:36.855638 kubelet[2869]: E0124 00:46:36.854985 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:46:36.855638 kubelet[2869]: E0124 00:46:36.855064 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5594cdc7fb-c8l7f_calico-apiserver(050a17cf-0e04-46c0-ad64-4ce3987ef3d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:36.855638 kubelet[2869]: E0124 00:46:36.855103 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" podUID="050a17cf-0e04-46c0-ad64-4ce3987ef3d5" Jan 24 00:46:37.184000 audit[5803]: NETFILTER_CFG table=filter:131 family=2 entries=26 op=nft_register_rule pid=5803 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:46:37.184000 audit[5803]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcb3a991d0 a2=0 a3=7ffcb3a991bc items=0 ppid=3027 pid=5803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:37.184000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:46:37.195000 audit[5803]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=5803 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:46:37.195000 audit[5803]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcb3a991d0 a2=0 a3=0 items=0 ppid=3027 pid=5803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:37.195000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:46:37.201618 sshd[5788]: Connection closed by 10.0.0.1 port 45812 Jan 24 00:46:37.205570 sshd-session[5781]: pam_unix(sshd:session): session closed for user core Jan 24 00:46:37.206000 audit[5781]: USER_END pid=5781 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:37.209000 audit[5781]: CRED_DISP pid=5781 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:37.219898 systemd[1]: sshd@23-10.0.0.71:22-10.0.0.1:45812.service: Deactivated successfully. Jan 24 00:46:37.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.71:22-10.0.0.1:45812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:37.225078 systemd[1]: session-25.scope: Deactivated successfully. Jan 24 00:46:37.235536 systemd-logind[1585]: Session 25 logged out. Waiting for processes to exit. Jan 24 00:46:37.242591 systemd[1]: Started sshd@24-10.0.0.71:22-10.0.0.1:45826.service - OpenSSH per-connection server daemon (10.0.0.1:45826). 
Jan 24 00:46:37.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.71:22-10.0.0.1:45826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:37.248996 systemd-logind[1585]: Removed session 25. Jan 24 00:46:37.524000 audit[5808]: USER_ACCT pid=5808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:37.530000 audit[5808]: CRED_ACQ pid=5808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:37.530000 audit[5808]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7f1e78d0 a2=3 a3=0 items=0 ppid=1 pid=5808 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:37.530000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:37.536446 sshd[5808]: Accepted publickey for core from 10.0.0.1 port 45826 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:46:37.533628 sshd-session[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:46:37.550420 systemd-logind[1585]: New session 26 of user core. Jan 24 00:46:37.558601 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 24 00:46:37.576000 audit[5808]: USER_START pid=5808 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:37.583000 audit[5813]: CRED_ACQ pid=5813 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:38.300703 sshd[5813]: Connection closed by 10.0.0.1 port 45826 Jan 24 00:46:38.300000 audit[5823]: NETFILTER_CFG table=filter:133 family=2 entries=38 op=nft_register_rule pid=5823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:46:38.300000 audit[5823]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff884d1410 a2=0 a3=7fff884d13fc items=0 ppid=3027 pid=5823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:38.300000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:46:38.302608 sshd-session[5808]: pam_unix(sshd:session): session closed for user core Jan 24 00:46:38.306000 audit[5823]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=5823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:46:38.306000 audit[5823]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff884d1410 a2=0 a3=0 items=0 
ppid=3027 pid=5823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:38.306000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:46:38.307000 audit[5808]: USER_END pid=5808 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:38.307000 audit[5808]: CRED_DISP pid=5808 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:38.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.71:22-10.0.0.1:45830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:38.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.71:22-10.0.0.1:45826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:38.324458 systemd[1]: Started sshd@25-10.0.0.71:22-10.0.0.1:45830.service - OpenSSH per-connection server daemon (10.0.0.1:45830). Jan 24 00:46:38.325727 systemd[1]: sshd@24-10.0.0.71:22-10.0.0.1:45826.service: Deactivated successfully. Jan 24 00:46:38.338033 systemd[1]: session-26.scope: Deactivated successfully. Jan 24 00:46:38.349121 systemd-logind[1585]: Session 26 logged out. Waiting for processes to exit. Jan 24 00:46:38.356657 systemd-logind[1585]: Removed session 26. Jan 24 00:46:38.574712 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 24 00:46:38.574855 kernel: audit: type=1101 audit(1769215598.546:905): pid=5834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:38.546000 audit[5834]: USER_ACCT pid=5834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:38.579436 sshd[5834]: Accepted publickey for core from 10.0.0.1 port 45830 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:46:38.567722 systemd-logind[1585]: New session 27 of user core. 
Jan 24 00:46:38.551125 sshd-session[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:46:38.547000 audit[5834]: CRED_ACQ pid=5834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:38.657675 kernel: audit: type=1103 audit(1769215598.547:906): pid=5834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:38.658069 kernel: audit: type=1006 audit(1769215598.547:907): pid=5834 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 24 00:46:38.547000 audit[5834]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9ef2d910 a2=3 a3=0 items=0 ppid=1 pid=5834 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:38.660681 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 24 00:46:38.700104 kernel: audit: type=1300 audit(1769215598.547:907): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9ef2d910 a2=3 a3=0 items=0 ppid=1 pid=5834 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:38.751104 kernel: audit: type=1327 audit(1769215598.547:907): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:38.547000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:38.672000 audit[5834]: USER_START pid=5834 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:38.812903 kernel: audit: type=1105 audit(1769215598.672:908): pid=5834 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:38.679000 audit[5859]: CRED_ACQ pid=5859 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:38.880415 kernel: audit: type=1103 audit(1769215598.679:909): pid=5859 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:39.164534 sshd[5859]: Connection closed by 10.0.0.1 port 45830 Jan 24 00:46:39.167372 sshd-session[5834]: pam_unix(sshd:session): session closed for user core Jan 24 00:46:39.172000 audit[5834]: USER_END pid=5834 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:39.190749 systemd[1]: sshd@25-10.0.0.71:22-10.0.0.1:45830.service: Deactivated successfully. Jan 24 00:46:39.201852 systemd[1]: session-27.scope: Deactivated successfully. Jan 24 00:46:39.210788 systemd-logind[1585]: Session 27 logged out. Waiting for processes to exit. Jan 24 00:46:39.215056 systemd-logind[1585]: Removed session 27. Jan 24 00:46:39.243579 kernel: audit: type=1106 audit(1769215599.172:910): pid=5834 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:39.172000 audit[5834]: CRED_DISP pid=5834 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:39.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.71:22-10.0.0.1:45830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:39.344565 kernel: audit: type=1104 audit(1769215599.172:911): pid=5834 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:39.344697 kernel: audit: type=1131 audit(1769215599.196:912): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.71:22-10.0.0.1:45830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:46:39.750826 containerd[1607]: time="2026-01-24T00:46:39.750623482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:46:39.857440 containerd[1607]: time="2026-01-24T00:46:39.856951795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:46:39.860867 containerd[1607]: time="2026-01-24T00:46:39.860099073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:46:39.860991 kubelet[2869]: E0124 00:46:39.860716 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:46:39.860991 kubelet[2869]: E0124 00:46:39.860771 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:46:39.860991 kubelet[2869]: E0124 00:46:39.860849 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5f8f47959d-9fk7m_calico-system(e041bbba-486b-4bf8-b212-ca4fbb2d4a57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:39.864014 containerd[1607]: time="2026-01-24T00:46:39.861706955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:46:39.864014 containerd[1607]: time="2026-01-24T00:46:39.863692663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:46:39.959587 containerd[1607]: time="2026-01-24T00:46:39.958380951Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:46:39.964647 containerd[1607]: time="2026-01-24T00:46:39.964602886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:46:39.964865 containerd[1607]: time="2026-01-24T00:46:39.964841673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:46:39.967532 kubelet[2869]: E0124 00:46:39.967437 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:46:39.967532 kubelet[2869]: E0124 00:46:39.967504 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:46:39.967988 kubelet[2869]: E0124 00:46:39.967593 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5f8f47959d-9fk7m_calico-system(e041bbba-486b-4bf8-b212-ca4fbb2d4a57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:39.967988 kubelet[2869]: E0124 00:46:39.967644 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f47959d-9fk7m" podUID="e041bbba-486b-4bf8-b212-ca4fbb2d4a57" Jan 24 00:46:40.743400 kubelet[2869]: E0124 00:46:40.742965 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" podUID="f244c052-aa71-4ccd-aaea-117d2939edf5" Jan 24 00:46:42.747423 kubelet[2869]: E0124 00:46:42.745762 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:42.747423 kubelet[2869]: E0124 00:46:42.746874 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:44.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.71:22-10.0.0.1:40500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:46:44.204578 systemd[1]: Started sshd@26-10.0.0.71:22-10.0.0.1:40500.service - OpenSSH per-connection server daemon (10.0.0.1:40500). Jan 24 00:46:44.252407 kernel: audit: type=1130 audit(1769215604.203:913): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.71:22-10.0.0.1:40500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:46:44.455000 audit[5878]: USER_ACCT pid=5878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:44.466494 sshd-session[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:46:44.491703 kernel: audit: type=1101 audit(1769215604.455:914): pid=5878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:44.491749 sshd[5878]: Accepted publickey for core from 10.0.0.1 port 40500 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:46:44.460000 audit[5878]: CRED_ACQ pid=5878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:44.521528 systemd-logind[1585]: New session 28 of user core. Jan 24 00:46:44.558101 kernel: audit: type=1103 audit(1769215604.460:915): pid=5878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:46:44.558519 kernel: audit: type=1006 audit(1769215604.460:916): pid=5878 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 24 00:46:44.559815 kernel: audit: type=1300 audit(1769215604.460:916): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc27b1c0a0 a2=3 a3=0 items=0 ppid=1 pid=5878 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:44.460000 audit[5878]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc27b1c0a0 a2=3 a3=0 items=0 ppid=1 pid=5878 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:46:44.598567 kernel: audit: type=1327 audit(1769215604.460:916): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:44.460000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:46:44.595043 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 24 00:46:44.617000 audit[5878]: USER_START pid=5878 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:44.682533 kernel: audit: type=1105 audit(1769215604.617:917): pid=5878 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:44.640000 audit[5895]: CRED_ACQ pid=5895 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:44.713431 kernel: audit: type=1103 audit(1769215604.640:918): pid=5895 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:45.013758 sshd[5895]: Connection closed by 10.0.0.1 port 40500
Jan 24 00:46:45.016602 sshd-session[5878]: pam_unix(sshd:session): session closed for user core
Jan 24 00:46:45.019000 audit[5878]: USER_END pid=5878 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:45.025093 systemd-logind[1585]: Session 28 logged out. Waiting for processes to exit.
Jan 24 00:46:45.032942 systemd[1]: sshd@26-10.0.0.71:22-10.0.0.1:40500.service: Deactivated successfully.
Jan 24 00:46:45.047562 systemd[1]: session-28.scope: Deactivated successfully.
Jan 24 00:46:45.053709 systemd-logind[1585]: Removed session 28.
Jan 24 00:46:45.019000 audit[5878]: CRED_DISP pid=5878 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:45.099590 kernel: audit: type=1106 audit(1769215605.019:919): pid=5878 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:45.099720 kernel: audit: type=1104 audit(1769215605.019:920): pid=5878 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:45.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.71:22-10.0.0.1:40500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:46:46.747066 containerd[1607]: time="2026-01-24T00:46:46.746973506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 24 00:46:46.841303 containerd[1607]: time="2026-01-24T00:46:46.833107816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 24 00:46:46.846076 containerd[1607]: time="2026-01-24T00:46:46.843265578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 24 00:46:46.846076 containerd[1607]: time="2026-01-24T00:46:46.843338756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 24 00:46:46.846493 kubelet[2869]: E0124 00:46:46.843677 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:46:46.846493 kubelet[2869]: E0124 00:46:46.843729 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:46:46.846493 kubelet[2869]: E0124 00:46:46.843818 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-j2nlt_calico-system(0329b08b-e4ed-4b35-88d7-60baae652219): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:46:46.846493 kubelet[2869]: E0124 00:46:46.843858 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j2nlt" podUID="0329b08b-e4ed-4b35-88d7-60baae652219"
Jan 24 00:46:48.748298 containerd[1607]: time="2026-01-24T00:46:48.747895819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:46:48.833918 containerd[1607]: time="2026-01-24T00:46:48.833630541Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 24 00:46:48.848925 containerd[1607]: time="2026-01-24T00:46:48.842645578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:46:48.848925 containerd[1607]: time="2026-01-24T00:46:48.842755364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 24 00:46:48.864033 kubelet[2869]: E0124 00:46:48.860063 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:46:48.864033 kubelet[2869]: E0124 00:46:48.860122 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:46:48.874876 kubelet[2869]: E0124 00:46:48.873056 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5594cdc7fb-7kk8c_calico-apiserver(5a82bd01-5299-411a-9329-279ee1a3e6ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:46:48.874876 kubelet[2869]: E0124 00:46:48.873110 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" podUID="5a82bd01-5299-411a-9329-279ee1a3e6ef"
Jan 24 00:46:50.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.71:22-10.0.0.1:40502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:46:50.054674 systemd[1]: Started sshd@27-10.0.0.71:22-10.0.0.1:40502.service - OpenSSH per-connection server daemon (10.0.0.1:40502).
Jan 24 00:46:50.073295 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 24 00:46:50.073483 kernel: audit: type=1130 audit(1769215610.053:922): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.71:22-10.0.0.1:40502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:46:50.262000 audit[5924]: USER_ACCT pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:50.265348 sshd[5924]: Accepted publickey for core from 10.0.0.1 port 40502 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:46:50.270678 sshd-session[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:46:50.266000 audit[5924]: CRED_ACQ pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:50.310474 systemd-logind[1585]: New session 29 of user core.
Jan 24 00:46:50.343482 kernel: audit: type=1101 audit(1769215610.262:923): pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:50.343650 kernel: audit: type=1103 audit(1769215610.266:924): pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:50.365627 kernel: audit: type=1006 audit(1769215610.266:925): pid=5924 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Jan 24 00:46:50.365750 kernel: audit: type=1300 audit(1769215610.266:925): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf303bd20 a2=3 a3=0 items=0 ppid=1 pid=5924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:46:50.266000 audit[5924]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf303bd20 a2=3 a3=0 items=0 ppid=1 pid=5924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:46:50.266000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:46:50.430828 kernel: audit: type=1327 audit(1769215610.266:925): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:46:50.436842 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 24 00:46:50.448000 audit[5924]: USER_START pid=5924 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:50.462000 audit[5929]: CRED_ACQ pid=5929 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:50.517797 kernel: audit: type=1105 audit(1769215610.448:926): pid=5924 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:50.517912 kernel: audit: type=1103 audit(1769215610.462:927): pid=5929 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:50.748110 kubelet[2869]: E0124 00:46:50.745899 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" podUID="050a17cf-0e04-46c0-ad64-4ce3987ef3d5"
Jan 24 00:46:50.749530 containerd[1607]: time="2026-01-24T00:46:50.746896041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 24 00:46:50.762299 sshd[5929]: Connection closed by 10.0.0.1 port 40502
Jan 24 00:46:50.763043 sshd-session[5924]: pam_unix(sshd:session): session closed for user core
Jan 24 00:46:50.765847 kubelet[2869]: E0124 00:46:50.765729 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f47959d-9fk7m" podUID="e041bbba-486b-4bf8-b212-ca4fbb2d4a57"
Jan 24 00:46:50.777000 audit[5924]: USER_END pid=5924 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:50.791357 systemd-logind[1585]: Session 29 logged out. Waiting for processes to exit.
Jan 24 00:46:50.792834 systemd[1]: sshd@27-10.0.0.71:22-10.0.0.1:40502.service: Deactivated successfully.
Jan 24 00:46:50.800091 systemd[1]: session-29.scope: Deactivated successfully.
Jan 24 00:46:50.805718 systemd-logind[1585]: Removed session 29.
Jan 24 00:46:50.780000 audit[5924]: CRED_DISP pid=5924 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:50.874915 kernel: audit: type=1106 audit(1769215610.777:928): pid=5924 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:50.875047 kernel: audit: type=1104 audit(1769215610.780:929): pid=5924 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:50.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.71:22-10.0.0.1:40502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:46:50.893700 containerd[1607]: time="2026-01-24T00:46:50.891567584Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 24 00:46:50.896653 containerd[1607]: time="2026-01-24T00:46:50.896506840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 24 00:46:50.896653 containerd[1607]: time="2026-01-24T00:46:50.896605094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Jan 24 00:46:50.897488 kubelet[2869]: E0124 00:46:50.897284 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:46:50.897488 kubelet[2869]: E0124 00:46:50.897341 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:46:50.897630 kubelet[2869]: E0124 00:46:50.897543 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-48xkv_calico-system(985a1218-3c37-4f6d-aa83-5ce6fdad91a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:46:50.901310 containerd[1607]: time="2026-01-24T00:46:50.901117954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 24 00:46:50.987371 containerd[1607]: time="2026-01-24T00:46:50.985760355Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 24 00:46:50.994224 containerd[1607]: time="2026-01-24T00:46:50.993095240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 24 00:46:50.994704 containerd[1607]: time="2026-01-24T00:46:50.994577044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Jan 24 00:46:51.004535 kubelet[2869]: E0124 00:46:50.999114 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:46:51.004535 kubelet[2869]: E0124 00:46:50.999303 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:46:51.004535 kubelet[2869]: E0124 00:46:50.999387 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-48xkv_calico-system(985a1218-3c37-4f6d-aa83-5ce6fdad91a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:46:51.004535 kubelet[2869]: E0124 00:46:50.999522 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9"
Jan 24 00:46:52.746575 containerd[1607]: time="2026-01-24T00:46:52.746526853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 24 00:46:52.835374 containerd[1607]: time="2026-01-24T00:46:52.835320385Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 24 00:46:52.839082 containerd[1607]: time="2026-01-24T00:46:52.838537849Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 24 00:46:52.839407 containerd[1607]: time="2026-01-24T00:46:52.838990363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Jan 24 00:46:52.840616 kubelet[2869]: E0124 00:46:52.839760 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:46:52.840616 kubelet[2869]: E0124 00:46:52.839889 2869 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:46:52.840616 kubelet[2869]: E0124 00:46:52.840124 2869 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5f7d444f9d-54g8g_calico-system(f244c052-aa71-4ccd-aaea-117d2939edf5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:46:52.840616 kubelet[2869]: E0124 00:46:52.840310 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" podUID="f244c052-aa71-4ccd-aaea-117d2939edf5"
Jan 24 00:46:55.547000 audit[5943]: NETFILTER_CFG table=filter:135 family=2 entries=26 op=nft_register_rule pid=5943 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 24 00:46:55.556784 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 24 00:46:55.557080 kernel: audit: type=1325 audit(1769215615.547:931): table=filter:135 family=2 entries=26 op=nft_register_rule pid=5943 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 24 00:46:55.547000 audit[5943]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc10e0f910 a2=0 a3=7ffc10e0f8fc items=0 ppid=3027 pid=5943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:46:55.617374 kernel: audit: type=1300 audit(1769215615.547:931): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc10e0f910 a2=0 a3=7ffc10e0f8fc items=0 ppid=3027 pid=5943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:46:55.617556 kernel: audit: type=1327 audit(1769215615.547:931): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 24 00:46:55.547000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 24 00:46:55.634839 kernel: audit: type=1325 audit(1769215615.625:932): table=nat:136 family=2 entries=104 op=nft_register_chain pid=5943 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 24 00:46:55.625000 audit[5943]: NETFILTER_CFG table=nat:136 family=2 entries=104 op=nft_register_chain pid=5943 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 24 00:46:55.625000 audit[5943]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc10e0f910 a2=0 a3=7ffc10e0f8fc items=0 ppid=3027 pid=5943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:46:55.708080 kernel: audit: type=1300 audit(1769215615.625:932): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc10e0f910 a2=0 a3=7ffc10e0f8fc items=0 ppid=3027 pid=5943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:46:55.625000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 24 00:46:55.728438 kernel: audit: type=1327 audit(1769215615.625:932): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 24 00:46:55.790105 systemd[1]: Started sshd@28-10.0.0.71:22-10.0.0.1:33678.service - OpenSSH per-connection server daemon (10.0.0.1:33678).
Jan 24 00:46:55.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.71:22-10.0.0.1:33678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:46:55.820406 kernel: audit: type=1130 audit(1769215615.789:933): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.71:22-10.0.0.1:33678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:46:55.970852 sshd[5945]: Accepted publickey for core from 10.0.0.1 port 33678 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:46:55.969000 audit[5945]: USER_ACCT pid=5945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:55.976912 sshd-session[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:46:55.997580 systemd-logind[1585]: New session 30 of user core.
Jan 24 00:46:55.973000 audit[5945]: CRED_ACQ pid=5945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:56.037974 kernel: audit: type=1101 audit(1769215615.969:934): pid=5945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:56.038082 kernel: audit: type=1103 audit(1769215615.973:935): pid=5945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:55.973000 audit[5945]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce2ea4040 a2=3 a3=0 items=0 ppid=1 pid=5945 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:46:55.973000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:46:56.061417 kernel: audit: type=1006 audit(1769215615.973:936): pid=5945 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1
Jan 24 00:46:56.064117 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 24 00:46:56.073000 audit[5945]: USER_START pid=5945 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:56.078000 audit[5949]: CRED_ACQ pid=5949 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:56.329920 sshd[5949]: Connection closed by 10.0.0.1 port 33678
Jan 24 00:46:56.339442 sshd-session[5945]: pam_unix(sshd:session): session closed for user core
Jan 24 00:46:56.344000 audit[5945]: USER_END pid=5945 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:56.344000 audit[5945]: CRED_DISP pid=5945 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:46:56.368445 systemd[1]: sshd@28-10.0.0.71:22-10.0.0.1:33678.service: Deactivated successfully.
Jan 24 00:46:56.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.71:22-10.0.0.1:33678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:46:56.376075 systemd[1]: session-30.scope: Deactivated successfully.
Jan 24 00:46:56.382560 systemd-logind[1585]: Session 30 logged out. Waiting for processes to exit.
Jan 24 00:46:56.385436 systemd-logind[1585]: Removed session 30.
Jan 24 00:46:57.747591 kubelet[2869]: E0124 00:46:57.747440 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j2nlt" podUID="0329b08b-e4ed-4b35-88d7-60baae652219"
Jan 24 00:47:01.588352 systemd[1]: Started sshd@29-10.0.0.71:22-10.0.0.1:33680.service - OpenSSH per-connection server daemon (10.0.0.1:33680).
Jan 24 00:47:01.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.71:22-10.0.0.1:33680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:47:01.608466 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jan 24 00:47:01.609888 kernel: audit: type=1130 audit(1769215621.587:942): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.71:22-10.0.0.1:33680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:47:01.751935 kubelet[2869]: E0124 00:47:01.751465 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-7kk8c" podUID="5a82bd01-5299-411a-9329-279ee1a3e6ef"
Jan 24 00:47:02.445000 audit[5962]: USER_ACCT pid=5962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:02.450921 sshd[5962]: Accepted publickey for core from 10.0.0.1 port 33680 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:47:02.484857 sshd-session[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:02.492075 kernel: audit: type=1101 audit(1769215622.445:943): pid=5962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:02.495439 kernel: audit: type=1103 audit(1769215622.461:944): pid=5962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:02.461000 audit[5962]: CRED_ACQ pid=5962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:02.559388 kernel: audit: type=1006 audit(1769215622.475:945): pid=5962 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1
Jan 24 00:47:02.561014 kernel: audit: type=1300 audit(1769215622.475:945): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2c2a7230 a2=3 a3=0 items=0 ppid=1 pid=5962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:47:02.475000 audit[5962]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2c2a7230 a2=3 a3=0 items=0 ppid=1 pid=5962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:47:02.562354 systemd-logind[1585]: New session 31 of user core.
Jan 24 00:47:02.475000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:47:02.601944 kernel: audit: type=1327 audit(1769215622.475:945): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:47:02.614037 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 24 00:47:02.645000 audit[5962]: USER_START pid=5962 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:02.694650 kernel: audit: type=1105 audit(1769215622.645:946): pid=5962 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:02.713509 kernel: audit: type=1103 audit(1769215622.654:947): pid=5966 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:02.654000 audit[5966]: CRED_ACQ pid=5966 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:02.799660 kubelet[2869]: E0124 00:47:02.789830 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f8f47959d-9fk7m" podUID="e041bbba-486b-4bf8-b212-ca4fbb2d4a57"
Jan 24 00:47:03.363478 sshd[5966]: Connection closed by 10.0.0.1 port 33680
Jan 24 00:47:03.365066 sshd-session[5962]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:03.371000 audit[5962]: USER_END pid=5962 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:03.379617 systemd-logind[1585]: Session 31 logged out. Waiting for processes to exit.
Jan 24 00:47:03.379954 systemd[1]: sshd@29-10.0.0.71:22-10.0.0.1:33680.service: Deactivated successfully.
Jan 24 00:47:03.387793 systemd[1]: session-31.scope: Deactivated successfully.
Jan 24 00:47:03.394998 systemd-logind[1585]: Removed session 31.
Jan 24 00:47:03.371000 audit[5962]: CRED_DISP pid=5962 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:03.412685 kernel: audit: type=1106 audit(1769215623.371:948): pid=5962 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:03.412732 kernel: audit: type=1104 audit(1769215623.371:949): pid=5962 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:03.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.71:22-10.0.0.1:33680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:47:03.758801 kubelet[2869]: E0124 00:47:03.756090 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7d444f9d-54g8g" podUID="f244c052-aa71-4ccd-aaea-117d2939edf5"
Jan 24 00:47:04.759836 kubelet[2869]: E0124 00:47:04.759759 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48xkv" podUID="985a1218-3c37-4f6d-aa83-5ce6fdad91a9"
Jan 24 00:47:04.762891 kubelet[2869]: E0124 00:47:04.762810 2869 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5594cdc7fb-c8l7f" podUID="050a17cf-0e04-46c0-ad64-4ce3987ef3d5"
Jan 24 00:47:04.857358 systemd[1714]: Created slice background.slice - User Background Tasks Slice.
Jan 24 00:47:04.876418 systemd[1714]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
Jan 24 00:47:04.943984 systemd[1714]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
Jan 24 00:47:07.742713 kubelet[2869]: E0124 00:47:07.742624 2869 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:47:08.384810 systemd[1]: Started sshd@30-10.0.0.71:22-10.0.0.1:40156.service - OpenSSH per-connection server daemon (10.0.0.1:40156).
Jan 24 00:47:08.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.71:22-10.0.0.1:40156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:47:08.388481 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 24 00:47:08.388656 kernel: audit: type=1130 audit(1769215628.383:951): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.71:22-10.0.0.1:40156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:47:08.498000 audit[6006]: USER_ACCT pid=6006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:08.499980 sshd[6006]: Accepted publickey for core from 10.0.0.1 port 40156 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:47:08.504326 sshd-session[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:08.512973 systemd-logind[1585]: New session 32 of user core.
Jan 24 00:47:08.501000 audit[6006]: CRED_ACQ pid=6006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:08.538625 kernel: audit: type=1101 audit(1769215628.498:952): pid=6006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:08.538723 kernel: audit: type=1103 audit(1769215628.501:953): pid=6006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:08.549463 kernel: audit: type=1006 audit(1769215628.501:954): pid=6006 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1
Jan 24 00:47:08.549730 kernel: audit: type=1300 audit(1769215628.501:954): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0768aef0 a2=3 a3=0 items=0 ppid=1 pid=6006 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:47:08.501000 audit[6006]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0768aef0 a2=3 a3=0 items=0 ppid=1 pid=6006 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:47:08.570235 kernel: audit: type=1327 audit(1769215628.501:954): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:47:08.501000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:47:08.578849 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 24 00:47:08.585000 audit[6006]: USER_START pid=6006 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:08.613297 kernel: audit: type=1105 audit(1769215628.585:955): pid=6006 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:08.588000 audit[6011]: CRED_ACQ pid=6011 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:08.637358 kernel: audit: type=1103 audit(1769215628.588:956): pid=6011 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:08.760910 sshd[6011]: Connection closed by 10.0.0.1 port 40156
Jan 24 00:47:08.762470 sshd-session[6006]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:08.765000 audit[6006]: USER_END pid=6006 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:08.775875 systemd-logind[1585]: Session 32 logged out. Waiting for processes to exit.
Jan 24 00:47:08.776877 systemd[1]: sshd@30-10.0.0.71:22-10.0.0.1:40156.service: Deactivated successfully.
Jan 24 00:47:08.781883 systemd[1]: session-32.scope: Deactivated successfully.
Jan 24 00:47:08.788454 systemd-logind[1585]: Removed session 32.
Jan 24 00:47:08.765000 audit[6006]: CRED_DISP pid=6006 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:08.806250 kernel: audit: type=1106 audit(1769215628.765:957): pid=6006 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:08.806364 kernel: audit: type=1104 audit(1769215628.765:958): pid=6006 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:47:08.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.71:22-10.0.0.1:40156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'