Jan 26 18:16:46.373572 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 26 15:51:16 -00 2026 Jan 26 18:16:46.373594 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7ccefddddc0421093e33229b6998deb24cdb3e69dcc9847e30d159fa75e66e9c Jan 26 18:16:46.373605 kernel: BIOS-provided physical RAM map: Jan 26 18:16:46.373611 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 26 18:16:46.373617 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 26 18:16:46.373623 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 26 18:16:46.373685 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 26 18:16:46.373692 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 26 18:16:46.373698 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 26 18:16:46.373704 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 26 18:16:46.373714 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 26 18:16:46.373720 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 26 18:16:46.373726 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 26 18:16:46.373732 kernel: NX (Execute Disable) protection: active Jan 26 18:16:46.373739 kernel: APIC: Static calls initialized Jan 26 18:16:46.373748 kernel: SMBIOS 2.8 present. 
Jan 26 18:16:46.373754 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 26 18:16:46.373761 kernel: DMI: Memory slots populated: 1/1 Jan 26 18:16:46.373767 kernel: Hypervisor detected: KVM Jan 26 18:16:46.373773 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 26 18:16:46.373780 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 26 18:16:46.373786 kernel: kvm-clock: using sched offset of 4146104581 cycles Jan 26 18:16:46.373793 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 26 18:16:46.373800 kernel: tsc: Detected 2445.426 MHz processor Jan 26 18:16:46.373807 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 26 18:16:46.373816 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 26 18:16:46.373822 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 26 18:16:46.373829 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 26 18:16:46.373836 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 26 18:16:46.373843 kernel: Using GB pages for direct mapping Jan 26 18:16:46.373849 kernel: ACPI: Early table checksum verification disabled Jan 26 18:16:46.373856 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 26 18:16:46.373865 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 26 18:16:46.373871 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 26 18:16:46.373878 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 26 18:16:46.373885 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 26 18:16:46.373891 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 26 18:16:46.373898 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 26 18:16:46.373905 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 26 18:16:46.373914 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 26 18:16:46.373924 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 26 18:16:46.373931 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 26 18:16:46.373938 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 26 18:16:46.373946 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 26 18:16:46.373953 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 26 18:16:46.373960 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 26 18:16:46.373990 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 26 18:16:46.373996 kernel: No NUMA configuration found Jan 26 18:16:46.374004 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 26 18:16:46.374011 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 26 18:16:46.374018 kernel: Zone ranges: Jan 26 18:16:46.374047 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 26 18:16:46.374054 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 26 18:16:46.374061 kernel: Normal empty Jan 26 18:16:46.374068 kernel: Device empty Jan 26 18:16:46.374075 kernel: Movable zone start for each node Jan 26 18:16:46.374081 kernel: Early memory node ranges Jan 26 18:16:46.374088 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Jan 26 18:16:46.374095 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 26 18:16:46.374104 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 26 18:16:46.374111 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 26 18:16:46.374137 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 26 18:16:46.374144 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 26 18:16:46.374151 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 26 18:16:46.374158 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 26 18:16:46.374165 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 26 18:16:46.374174 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 26 18:16:46.374181 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 26 18:16:46.374188 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 26 18:16:46.374195 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 26 18:16:46.374202 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 26 18:16:46.374209 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 26 18:16:46.374216 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 26 18:16:46.374225 kernel: TSC deadline timer available Jan 26 18:16:46.374232 kernel: CPU topo: Max. logical packages: 1 Jan 26 18:16:46.374239 kernel: CPU topo: Max. logical dies: 1 Jan 26 18:16:46.374246 kernel: CPU topo: Max. dies per package: 1 Jan 26 18:16:46.374252 kernel: CPU topo: Max. threads per core: 1 Jan 26 18:16:46.374259 kernel: CPU topo: Num. cores per package: 4 Jan 26 18:16:46.374266 kernel: CPU topo: Num. threads per package: 4 Jan 26 18:16:46.374273 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 26 18:16:46.374282 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 26 18:16:46.374289 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 26 18:16:46.374296 kernel: kvm-guest: setup PV sched yield Jan 26 18:16:46.374302 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 26 18:16:46.374309 kernel: Booting paravirtualized kernel on KVM Jan 26 18:16:46.374316 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 26 18:16:46.374324 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 26 18:16:46.374333 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 26 18:16:46.374340 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 26 18:16:46.374346 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 26 18:16:46.374353 kernel: kvm-guest: PV spinlocks enabled Jan 26 18:16:46.374360 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 26 18:16:46.374368 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7ccefddddc0421093e33229b6998deb24cdb3e69dcc9847e30d159fa75e66e9c Jan 26 18:16:46.374375 kernel: random: crng init done Jan 26 18:16:46.374384 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 26 18:16:46.374391 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 26 
18:16:46.374398 kernel: Fallback order for Node 0: 0 Jan 26 18:16:46.374405 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Jan 26 18:16:46.374439 kernel: Policy zone: DMA32 Jan 26 18:16:46.374447 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 26 18:16:46.374454 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 26 18:16:46.374464 kernel: ftrace: allocating 40128 entries in 157 pages Jan 26 18:16:46.374471 kernel: ftrace: allocated 157 pages with 5 groups Jan 26 18:16:46.374478 kernel: Dynamic Preempt: voluntary Jan 26 18:16:46.374484 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 26 18:16:46.374492 kernel: rcu: RCU event tracing is enabled. Jan 26 18:16:46.374499 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 26 18:16:46.374506 kernel: Trampoline variant of Tasks RCU enabled. Jan 26 18:16:46.374513 kernel: Rude variant of Tasks RCU enabled. Jan 26 18:16:46.374523 kernel: Tracing variant of Tasks RCU enabled. Jan 26 18:16:46.374529 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 26 18:16:46.374536 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 26 18:16:46.374543 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 26 18:16:46.374550 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 26 18:16:46.374557 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 26 18:16:46.374564 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 26 18:16:46.374573 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 26 18:16:46.374587 kernel: Console: colour VGA+ 80x25 Jan 26 18:16:46.374596 kernel: printk: legacy console [ttyS0] enabled Jan 26 18:16:46.374603 kernel: ACPI: Core revision 20240827 Jan 26 18:16:46.374610 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 26 18:16:46.374618 kernel: APIC: Switch to symmetric I/O mode setup Jan 26 18:16:46.374625 kernel: x2apic enabled Jan 26 18:16:46.374679 kernel: APIC: Switched APIC routing to: physical x2apic Jan 26 18:16:46.374687 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 26 18:16:46.374698 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 26 18:16:46.374706 kernel: kvm-guest: setup PV IPIs Jan 26 18:16:46.374713 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 26 18:16:46.374720 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 26 18:16:46.374730 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 26 18:16:46.374737 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 26 18:16:46.374744 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 26 18:16:46.374752 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 26 18:16:46.374759 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 26 18:16:46.374766 kernel: Spectre V2 : Mitigation: Retpolines Jan 26 18:16:46.374774 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 26 18:16:46.374783 kernel: Speculative Store Bypass: Vulnerable Jan 26 18:16:46.374790 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 26 18:16:46.374798 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 26 18:16:46.374805 kernel: active return thunk: srso_alias_return_thunk Jan 26 18:16:46.374813 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 26 18:16:46.374820 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 26 18:16:46.374827 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 26 18:16:46.374837 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 26 18:16:46.374844 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 26 18:16:46.374852 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 26 18:16:46.374859 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 26 18:16:46.374866 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 26 18:16:46.374874 kernel: Freeing SMP alternatives memory: 32K Jan 26 18:16:46.374881 kernel: pid_max: default: 32768 minimum: 301 Jan 26 18:16:46.374890 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 26 18:16:46.374897 kernel: landlock: Up and running. Jan 26 18:16:46.374905 kernel: SELinux: Initializing. Jan 26 18:16:46.374912 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 26 18:16:46.374919 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 26 18:16:46.374927 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 26 18:16:46.374934 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 26 18:16:46.374943 kernel: signal: max sigframe size: 1776 Jan 26 18:16:46.374951 kernel: rcu: Hierarchical SRCU implementation. Jan 26 18:16:46.374958 kernel: rcu: Max phase no-delay instances is 400. Jan 26 18:16:46.374965 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 26 18:16:46.374972 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 26 18:16:46.374980 kernel: smp: Bringing up secondary CPUs ... Jan 26 18:16:46.374987 kernel: smpboot: x86: Booting SMP configuration: Jan 26 18:16:46.374996 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 26 18:16:46.375003 kernel: smp: Brought up 1 node, 4 CPUs Jan 26 18:16:46.375010 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 26 18:16:46.375018 kernel: Memory: 2445296K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120520K reserved, 0K cma-reserved) Jan 26 18:16:46.375025 kernel: devtmpfs: initialized Jan 26 18:16:46.375032 kernel: x86/mm: Memory block size: 128MB Jan 26 18:16:46.375040 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 26 18:16:46.375049 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 26 18:16:46.375056 kernel: pinctrl core: initialized pinctrl subsystem Jan 26 18:16:46.375063 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 26 18:16:46.375070 kernel: audit: initializing netlink subsys (disabled) Jan 26 18:16:46.375077 kernel: audit: type=2000 audit(1769451402.871:1): state=initialized audit_enabled=0 res=1 Jan 26 18:16:46.375084 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 26 18:16:46.375092 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 26 18:16:46.375101 kernel: cpuidle: using governor menu Jan 26 18:16:46.375108 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 26 18:16:46.375115 kernel: dca service started, version 1.12.1 Jan 26 18:16:46.375123 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 26 18:16:46.375130 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 26 18:16:46.375137 kernel: PCI: Using configuration type 1 for base access Jan 26 18:16:46.375144 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 26 18:16:46.375153 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 26 18:16:46.375161 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 26 18:16:46.375168 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 26 18:16:46.375175 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 26 18:16:46.375182 kernel: ACPI: Added _OSI(Module Device) Jan 26 18:16:46.375189 kernel: ACPI: Added _OSI(Processor Device) Jan 26 18:16:46.375196 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 26 18:16:46.375203 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 26 18:16:46.375213 kernel: ACPI: Interpreter enabled Jan 26 18:16:46.375220 kernel: ACPI: PM: (supports S0 S3 S5) Jan 26 18:16:46.375227 kernel: ACPI: Using IOAPIC for interrupt routing Jan 26 18:16:46.375234 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 26 18:16:46.375241 kernel: PCI: Using E820 reservations for host bridge windows Jan 26 18:16:46.375248 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 26 18:16:46.375256 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 26 18:16:46.375533 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 26 18:16:46.375777 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 26 18:16:46.375956 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 26 18:16:46.375966 kernel: PCI host bridge to bus 0000:00 Jan 26 18:16:46.376137 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 26 18:16:46.376302 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 26 18:16:46.376500 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 26 18:16:46.376733 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 26 18:16:46.376894 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 26 18:16:46.377048 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 26 18:16:46.377202 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 26 18:16:46.377392 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 26 18:16:46.377614 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 26 18:16:46.377850 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 26 18:16:46.378020 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 26 18:16:46.378185 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 26 18:16:46.378354 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 26 18:16:46.378569 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 26 18:16:46.378797 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 26 18:16:46.378966 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 26 18:16:46.379133 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 26 18:16:46.379307 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 26 18:16:46.379517 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 26 18:16:46.379786 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jan 26 18:16:46.380003 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Jan 26 18:16:46.380184 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 26 18:16:46.380353 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jan 26 18:16:46.380569 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jan 26 18:16:46.380796 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 26 18:16:46.380966 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 26 18:16:46.381140 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 26 18:16:46.381306 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 26 18:16:46.381586 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 26 18:16:46.381834 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jan 26 18:16:46.382004 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jan 26 18:16:46.382179 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 26 18:16:46.382350 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 26 18:16:46.382361 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 26 18:16:46.382368 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 26 18:16:46.382380 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 26 18:16:46.382387 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 26 18:16:46.382394 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 26 18:16:46.382401 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 26 18:16:46.382408 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 26 18:16:46.382454 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 26 18:16:46.382462 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 26 18:16:46.382472 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 26 18:16:46.382479 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 26 18:16:46.382486 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 26 18:16:46.382494 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 26 18:16:46.382501 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 26 18:16:46.382508 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 26 18:16:46.382515 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 26 18:16:46.382525 kernel: iommu: Default domain type: Translated Jan 26 18:16:46.382532 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 26 18:16:46.382539 kernel: PCI: Using ACPI for IRQ routing Jan 26 18:16:46.382547 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 26 18:16:46.382554 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 26 18:16:46.382561 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 26 18:16:46.382787 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 26 18:16:46.382961 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 26 18:16:46.383127 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 26 18:16:46.383137 kernel: vgaarb: loaded Jan 26 18:16:46.383144 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 26 18:16:46.383152 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 26 18:16:46.383159 kernel: clocksource: Switched to clocksource kvm-clock Jan 26 18:16:46.383166 kernel: VFS: Disk quotas dquot_6.6.0 Jan 26 
18:16:46.383176 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 26 18:16:46.383184 kernel: pnp: PnP ACPI init Jan 26 18:16:46.383363 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 26 18:16:46.383374 kernel: pnp: PnP ACPI: found 6 devices Jan 26 18:16:46.383382 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 26 18:16:46.383389 kernel: NET: Registered PF_INET protocol family Jan 26 18:16:46.383399 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 26 18:16:46.383407 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 26 18:16:46.383455 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 26 18:16:46.383463 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 26 18:16:46.383471 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 26 18:16:46.383478 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 26 18:16:46.383485 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 26 18:16:46.383495 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 26 18:16:46.383502 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 26 18:16:46.383510 kernel: NET: Registered PF_XDP protocol family Jan 26 18:16:46.383731 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 26 18:16:46.383896 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 26 18:16:46.384053 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 26 18:16:46.384207 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 26 18:16:46.384367 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 26 18:16:46.384562 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 26 18:16:46.384574 kernel: PCI: CLS 0 bytes, default 64 Jan 26 18:16:46.384582 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 26 18:16:46.384590 kernel: Initialise system trusted keyrings Jan 26 18:16:46.384597 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 26 18:16:46.384604 kernel: Key type asymmetric registered Jan 26 18:16:46.384614 kernel: Asymmetric key parser 'x509' registered Jan 26 18:16:46.384622 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 26 18:16:46.384681 kernel: io scheduler mq-deadline registered Jan 26 18:16:46.384712 kernel: io scheduler kyber registered Jan 26 18:16:46.384720 kernel: io scheduler bfq registered Jan 26 18:16:46.384728 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 26 18:16:46.384735 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 26 18:16:46.384746 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 26 18:16:46.384753 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 26 18:16:46.384760 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 26 18:16:46.384768 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 26 18:16:46.384775 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 26 18:16:46.384783 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 26 18:16:46.384790 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 26 18:16:46.384972 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 26 
18:16:46.384983 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 26 18:16:46.385144 kernel: rtc_cmos 00:04: registered as rtc0 Jan 26 18:16:46.385305 kernel: rtc_cmos 00:04: setting system clock to 2026-01-26T18:16:44 UTC (1769451404) Jan 26 18:16:46.385507 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 26 18:16:46.385519 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 26 18:16:46.385530 kernel: NET: Registered PF_INET6 protocol family Jan 26 18:16:46.385537 kernel: Segment Routing with IPv6 Jan 26 18:16:46.385545 kernel: In-situ OAM (IOAM) with IPv6 Jan 26 18:16:46.385552 kernel: NET: Registered PF_PACKET protocol family Jan 26 18:16:46.385559 kernel: Key type dns_resolver registered Jan 26 18:16:46.385566 kernel: IPI shorthand broadcast: enabled Jan 26 18:16:46.385574 kernel: sched_clock: Marking stable (2061021589, 370113487)->(2564795533, -133660457) Jan 26 18:16:46.385581 kernel: registered taskstats version 1 Jan 26 18:16:46.385591 kernel: Loading compiled-in X.509 certificates Jan 26 18:16:46.385598 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3aafff36862946ad45897da10ba1e85c8fafc8e8' Jan 26 18:16:46.385605 kernel: Demotion targets for Node 0: null Jan 26 18:16:46.385613 kernel: Key type .fscrypt registered Jan 26 18:16:46.385620 kernel: Key type fscrypt-provisioning registered Jan 26 18:16:46.385627 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 26 18:16:46.385689 kernel: ima: Allocated hash algorithm: sha1 Jan 26 18:16:46.385696 kernel: ima: No architecture policies found Jan 26 18:16:46.385704 kernel: clk: Disabling unused clocks Jan 26 18:16:46.385711 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 26 18:16:46.385718 kernel: Write protecting the kernel read-only data: 47104k Jan 26 18:16:46.385726 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 26 18:16:46.385733 kernel: Run /init as init process Jan 26 18:16:46.385740 kernel: with arguments: Jan 26 18:16:46.385749 kernel: /init Jan 26 18:16:46.385757 kernel: with environment: Jan 26 18:16:46.385764 kernel: HOME=/ Jan 26 18:16:46.385771 kernel: TERM=linux Jan 26 18:16:46.385778 kernel: SCSI subsystem initialized Jan 26 18:16:46.385785 kernel: libata version 3.00 loaded. 
Jan 26 18:16:46.385960 kernel: ahci 0000:00:1f.2: version 3.0 Jan 26 18:16:46.385973 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 26 18:16:46.386139 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 26 18:16:46.386305 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 26 18:16:46.386509 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 26 18:16:46.386757 kernel: scsi host0: ahci Jan 26 18:16:46.386940 kernel: scsi host1: ahci Jan 26 18:16:46.387122 kernel: scsi host2: ahci Jan 26 18:16:46.387299 kernel: scsi host3: ahci Jan 26 18:16:46.387533 kernel: scsi host4: ahci Jan 26 18:16:46.387776 kernel: scsi host5: ahci Jan 26 18:16:46.387789 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Jan 26 18:16:46.387800 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Jan 26 18:16:46.387808 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Jan 26 18:16:46.387816 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Jan 26 18:16:46.387824 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Jan 26 18:16:46.387831 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Jan 26 18:16:46.387839 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 26 18:16:46.387846 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 26 18:16:46.387856 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 26 18:16:46.387863 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 26 18:16:46.387871 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 26 18:16:46.387878 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 26 18:16:46.387886 kernel: ata3.00: LPM support broken, forcing max_power Jan 26 18:16:46.387893 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 26 18:16:46.387901 kernel: ata3.00: applying bridge limits Jan 26 18:16:46.387911 kernel: ata3.00: LPM support broken, forcing max_power Jan 26 18:16:46.387918 kernel: ata3.00: configured for UDMA/100 Jan 26 18:16:46.388114 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 26 18:16:46.388328 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 26 18:16:46.388540 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 26 18:16:46.388551 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 26 18:16:46.388563 kernel: GPT:16515071 != 27000831 Jan 26 18:16:46.388570 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 26 18:16:46.388578 kernel: GPT:16515071 != 27000831 Jan 26 18:16:46.388585 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 26 18:16:46.388592 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 26 18:16:46.388835 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 26 18:16:46.388847 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 26 18:16:46.389035 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 26 18:16:46.389046 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 26 18:16:46.389053 kernel: device-mapper: uevent: version 1.0.3 Jan 26 18:16:46.389061 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 26 18:16:46.389069 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 26 18:16:46.389076 kernel: raid6: avx2x4 gen() 38504 MB/s Jan 26 18:16:46.389084 kernel: raid6: avx2x2 gen() 38428 MB/s Jan 26 18:16:46.389094 kernel: raid6: avx2x1 gen() 27407 MB/s Jan 26 18:16:46.389102 kernel: raid6: using algorithm avx2x4 gen() 38504 MB/s Jan 26 18:16:46.389109 kernel: raid6: .... xor() 4695 MB/s, rmw enabled Jan 26 18:16:46.389118 kernel: raid6: using avx2x2 recovery algorithm Jan 26 18:16:46.389126 kernel: xor: automatically using best checksumming function avx Jan 26 18:16:46.389134 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 26 18:16:46.389142 kernel: BTRFS: device fsid c78f7707-5c76-44ff-97b4-b1f791a94b1d devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (182) Jan 26 18:16:46.389152 kernel: BTRFS info (device dm-0): first mount of filesystem c78f7707-5c76-44ff-97b4-b1f791a94b1d Jan 26 18:16:46.389160 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 26 18:16:46.389167 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 26 18:16:46.389175 kernel: BTRFS info (device dm-0): enabling free space tree Jan 26 18:16:46.389185 kernel: loop: module loaded Jan 26 18:16:46.389193 kernel: loop0: detected capacity change from 0 to 100552 Jan 26 18:16:46.389200 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 26 18:16:46.389209 systemd[1]: Successfully made /usr/ read-only. Jan 26 18:16:46.389219 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 26 18:16:46.389227 systemd[1]: Detected virtualization kvm. Jan 26 18:16:46.389237 systemd[1]: Detected architecture x86-64. Jan 26 18:16:46.389245 systemd[1]: Running in initrd. Jan 26 18:16:46.389253 systemd[1]: No hostname configured, using default hostname. Jan 26 18:16:46.389261 systemd[1]: Hostname set to . Jan 26 18:16:46.389269 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 26 18:16:46.389277 systemd[1]: Queued start job for default target initrd.target. Jan 26 18:16:46.389285 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 26 18:16:46.389295 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 26 18:16:46.389303 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 26 18:16:46.389312 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 26 18:16:46.389320 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 26 18:16:46.389329 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 26 18:16:46.389339 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 26 18:16:46.389347 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 26 18:16:46.389355 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 26 18:16:46.389365 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 26 18:16:46.389373 systemd[1]: Reached target paths.target - Path Units. Jan 26 18:16:46.389381 systemd[1]: Reached target slices.target - Slice Units. Jan 26 18:16:46.389389 systemd[1]: Reached target swap.target - Swaps. Jan 26 18:16:46.389399 systemd[1]: Reached target timers.target - Timer Units. Jan 26 18:16:46.389407 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 26 18:16:46.389451 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 26 18:16:46.389459 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 26 18:16:46.389468 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 26 18:16:46.389476 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 26 18:16:46.389484 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 26 18:16:46.389494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 26 18:16:46.389502 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 26 18:16:46.389510 systemd[1]: Reached target sockets.target - Socket Units. Jan 26 18:16:46.389518 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 26 18:16:46.389527 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 26 18:16:46.389534 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 26 18:16:46.389545 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 26 18:16:46.389553 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 26 18:16:46.389561 systemd[1]: Starting systemd-fsck-usr.service... Jan 26 18:16:46.389569 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 26 18:16:46.389577 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 26 18:16:46.389588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 26 18:16:46.389620 systemd-journald[320]: Collecting audit messages is enabled. Jan 26 18:16:46.389693 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 26 18:16:46.389703 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 26 18:16:46.389711 kernel: audit: type=1130 audit(1769451406.382:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.389720 systemd-journald[320]: Journal started Jan 26 18:16:46.389737 systemd-journald[320]: Runtime Journal (/run/log/journal/670972c76d3b41518e746166fadcb565) is 6M, max 48.2M, 42.1M free. Jan 26 18:16:46.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 26 18:16:46.410692 kernel: audit: type=1130 audit(1769451406.401:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.410722 systemd[1]: Started systemd-journald.service - Journal Service. Jan 26 18:16:46.423528 kernel: audit: type=1130 audit(1769451406.413:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.414986 systemd[1]: Finished systemd-fsck-usr.service. Jan 26 18:16:46.437759 kernel: audit: type=1130 audit(1769451406.415:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.418979 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 26 18:16:46.614247 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 26 18:16:46.614282 kernel: Bridge firewalling registered Jan 26 18:16:46.445848 systemd-modules-load[323]: Inserted module 'br_netfilter' Jan 26 18:16:46.445874 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 26 18:16:46.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.618932 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 26 18:16:46.636302 kernel: audit: type=1130 audit(1769451406.625:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.633061 systemd-tmpfiles[335]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 26 18:16:46.653612 kernel: audit: type=1130 audit(1769451406.640:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.634541 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 26 18:16:46.653892 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 26 18:16:46.674031 kernel: audit: type=1130 audit(1769451406.660:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:16:46.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.674186 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 26 18:16:46.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.690746 kernel: audit: type=1130 audit(1769451406.681:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.692689 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 26 18:16:46.697532 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 26 18:16:46.717526 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 26 18:16:46.733196 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 26 18:16:46.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.743739 kernel: audit: type=1130 audit(1769451406.733:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.743881 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 26 18:16:46.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.754873 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 26 18:16:46.761472 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 26 18:16:46.765971 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 26 18:16:46.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.764000 audit: BPF prog-id=6 op=LOAD Jan 26 18:16:46.796356 dracut-cmdline[357]: dracut-109 Jan 26 18:16:46.803901 dracut-cmdline[357]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7ccefddddc0421093e33229b6998deb24cdb3e69dcc9847e30d159fa75e66e9c Jan 26 18:16:46.846893 systemd-resolved[359]: Positive Trust Anchors: Jan 26 18:16:46.846907 systemd-resolved[359]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 26 18:16:46.846912 systemd-resolved[359]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 26 18:16:46.846939 systemd-resolved[359]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 26 18:16:46.902244 systemd-resolved[359]: Defaulting to hostname 'linux'. Jan 26 18:16:46.906149 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 26 18:16:46.908317 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 26 18:16:46.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:46.983721 kernel: Loading iSCSI transport class v2.0-870. Jan 26 18:16:47.000703 kernel: iscsi: registered transport (tcp) Jan 26 18:16:47.024964 kernel: iscsi: registered transport (qla4xxx) Jan 26 18:16:47.025039 kernel: QLogic iSCSI HBA Driver Jan 26 18:16:47.058708 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 26 18:16:47.096399 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 26 18:16:47.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:47.107904 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 26 18:16:47.175767 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 26 18:16:47.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:47.181818 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 26 18:16:47.187509 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 26 18:16:47.226817 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 26 18:16:47.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:47.233000 audit: BPF prog-id=7 op=LOAD Jan 26 18:16:47.233000 audit: BPF prog-id=8 op=LOAD Jan 26 18:16:47.235269 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 26 18:16:47.294025 systemd-udevd[588]: Using default interface naming scheme 'v257'. Jan 26 18:16:47.315338 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 26 18:16:47.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:16:47.324715 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 26 18:16:47.361315 dracut-pre-trigger[666]: rd.md=0: removing MD RAID activation Jan 26 18:16:47.374585 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 26 18:16:47.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:47.374000 audit: BPF prog-id=9 op=LOAD Jan 26 18:16:47.378855 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 26 18:16:47.428126 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 26 18:16:47.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:47.434334 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 26 18:16:47.451516 systemd-networkd[711]: lo: Link UP Jan 26 18:16:47.451548 systemd-networkd[711]: lo: Gained carrier Jan 26 18:16:47.456379 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 26 18:16:47.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:47.463477 systemd[1]: Reached target network.target - Network. Jan 26 18:16:47.544105 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 26 18:16:47.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:47.554227 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 26 18:16:47.616235 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 26 18:16:47.676723 kernel: cryptd: max_cpu_qlen set to 1000 Jan 26 18:16:47.678538 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 26 18:16:47.699865 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 26 18:16:47.704764 systemd-networkd[711]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 26 18:16:47.704773 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 26 18:16:47.706515 systemd-networkd[711]: eth0: Link UP Jan 26 18:16:47.707269 systemd-networkd[711]: eth0: Gained carrier Jan 26 18:16:47.707280 systemd-networkd[711]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 26 18:16:47.720212 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 26 18:16:47.729383 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 26 18:16:47.774252 kernel: kauditd_printk_skb: 15 callbacks suppressed Jan 26 18:16:47.774283 kernel: audit: type=1131 audit(1769451407.748:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:47.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:47.731911 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 26 18:16:47.732127 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 26 18:16:47.748821 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 26 18:16:47.798757 kernel: AES CTR mode by8 optimization enabled Jan 26 18:16:47.802353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 26 18:16:47.815947 systemd-networkd[711]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 26 18:16:47.834528 disk-uuid[797]: Primary Header is updated. Jan 26 18:16:47.834528 disk-uuid[797]: Secondary Entries is updated. Jan 26 18:16:47.834528 disk-uuid[797]: Secondary Header is updated. Jan 26 18:16:47.851693 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 26 18:16:47.944139 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 26 18:16:48.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:48.073473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 26 18:16:48.087748 kernel: audit: type=1130 audit(1769451408.065:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:48.087774 kernel: audit: type=1130 audit(1769451408.076:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:48.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:48.089233 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 26 18:16:48.093468 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 26 18:16:48.097518 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 26 18:16:48.114621 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 26 18:16:48.157461 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 26 18:16:48.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:48.177735 kernel: audit: type=1130 audit(1769451408.161:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:16:48.739038 systemd-networkd[711]: eth0: Gained IPv6LL Jan 26 18:16:48.899216 disk-uuid[829]: Warning: The kernel is still using the old partition table. Jan 26 18:16:48.899216 disk-uuid[829]: The new table will be used at the next reboot or after you Jan 26 18:16:48.899216 disk-uuid[829]: run partprobe(8) or kpartx(8) Jan 26 18:16:48.899216 disk-uuid[829]: The operation has completed successfully. Jan 26 18:16:48.916052 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 26 18:16:48.916243 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 26 18:16:48.924237 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 26 18:16:48.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:48.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:48.939730 kernel: audit: type=1130 audit(1769451408.922:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:48.939761 kernel: audit: type=1131 audit(1769451408.922:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:48.990578 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (870) Jan 26 18:16:48.990615 kernel: BTRFS info (device vda6): first mount of filesystem 74befb1c-e259-44ed-b7ad-b49e24b3bbbe Jan 26 18:16:48.990681 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 26 18:16:49.003672 kernel: BTRFS info (device vda6): turning on async discard Jan 26 18:16:49.003703 kernel: BTRFS info (device vda6): enabling free space tree Jan 26 18:16:49.015771 kernel: BTRFS info (device vda6): last unmount of filesystem 74befb1c-e259-44ed-b7ad-b49e24b3bbbe Jan 26 18:16:49.018358 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 26 18:16:49.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:49.028199 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 26 18:16:49.041135 kernel: audit: type=1130 audit(1769451409.025:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:16:49.150949 ignition[889]: Ignition 2.24.0 Jan 26 18:16:49.150989 ignition[889]: Stage: fetch-offline Jan 26 18:16:49.151036 ignition[889]: no configs at "/usr/lib/ignition/base.d" Jan 26 18:16:49.151048 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 26 18:16:49.151143 ignition[889]: parsed url from cmdline: "" Jan 26 18:16:49.151147 ignition[889]: no config URL provided Jan 26 18:16:49.151152 ignition[889]: reading system config file "/usr/lib/ignition/user.ign" Jan 26 18:16:49.151162 ignition[889]: no config at "/usr/lib/ignition/user.ign" Jan 26 18:16:49.151203 ignition[889]: op(1): [started] loading QEMU firmware config module Jan 26 18:16:49.151208 ignition[889]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 26 18:16:49.176459 ignition[889]: op(1): [finished] loading QEMU firmware config module Jan 26 18:16:49.406822 ignition[889]: parsing config with SHA512: 99a7fd0cc7d37ea13d6215a477d4892ab4a7b70d22dda634351dda66bfbe431ea643836637382357e5da5e5a1b4007a4dee9df5260aace863ade41e58c6538d2 Jan 26 18:16:49.416112 unknown[889]: fetched base config from "system" Jan 26 18:16:49.416223 unknown[889]: fetched user config from "qemu" Jan 26 18:16:49.423491 ignition[889]: fetch-offline: fetch-offline passed Jan 26 18:16:49.423729 ignition[889]: Ignition finished successfully Jan 26 18:16:49.447208 kernel: audit: type=1130 audit(1769451409.432:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:49.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:49.427026 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 26 18:16:49.432799 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 26 18:16:49.433880 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 26 18:16:49.491377 ignition[899]: Ignition 2.24.0 Jan 26 18:16:49.491423 ignition[899]: Stage: kargs Jan 26 18:16:49.491733 ignition[899]: no configs at "/usr/lib/ignition/base.d" Jan 26 18:16:49.491752 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 26 18:16:49.492973 ignition[899]: kargs: kargs passed Jan 26 18:16:49.493025 ignition[899]: Ignition finished successfully Jan 26 18:16:49.507781 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 26 18:16:49.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:49.515696 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 26 18:16:49.527162 kernel: audit: type=1130 audit(1769451409.513:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:16:49.563399 ignition[906]: Ignition 2.24.0 Jan 26 18:16:49.563469 ignition[906]: Stage: disks Jan 26 18:16:49.563622 ignition[906]: no configs at "/usr/lib/ignition/base.d" Jan 26 18:16:49.563694 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 26 18:16:49.564543 ignition[906]: disks: disks passed Jan 26 18:16:49.587236 kernel: audit: type=1130 audit(1769451409.572:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:49.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:49.571843 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 26 18:16:49.564584 ignition[906]: Ignition finished successfully Jan 26 18:16:49.573560 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 26 18:16:49.588412 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 26 18:16:49.597532 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 26 18:16:49.603223 systemd[1]: Reached target sysinit.target - System Initialization. Jan 26 18:16:49.605569 systemd[1]: Reached target basic.target - Basic System. Jan 26 18:16:49.619500 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 26 18:16:49.688041 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 26 18:16:49.694586 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 26 18:16:49.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:49.708423 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 26 18:16:49.877771 kernel: EXT4-fs (vda9): mounted filesystem 348114a3-4c6d-4729-be31-f084b711617b r/w with ordered data mode. Quota mode: none. Jan 26 18:16:49.879279 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 26 18:16:49.885176 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 26 18:16:49.888286 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 26 18:16:49.901528 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 26 18:16:49.903584 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 26 18:16:49.903622 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 26 18:16:49.903696 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 26 18:16:49.928700 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (923) Jan 26 18:16:49.930011 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
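The disk-uuid step earlier warned that the kernel keeps using the old partition table until the disk is re-read (or until reboot), pointing at partprobe(8) or kpartx(8). The simplest form of that re-read is the BLKRRPART ioctl on the whole-disk node, which is what `blockdev --rereadpt` issues and which partprobe approximates with finer-grained calls. A minimal sketch, using /dev/vda as in this boot (needs root, and the ioctl fails with EBUSY while any partition on the disk is mounted):

#!/usr/bin/env python3
# Ask the kernel to re-read a disk's partition table (the BLKRRPART ioctl
# behind `blockdev --rereadpt`); partprobe achieves a similar effect.
import fcntl
import os

BLKRRPART = 0x125F  # _IO(0x12, 95) from <linux/fs.h>

def reread_partition_table(disk="/dev/vda"):
    """Issue BLKRRPART on the whole-disk device node."""
    fd = os.open(disk, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, BLKRRPART)
    finally:
        os.close(fd)

if __name__ == "__main__":
    reread_partition_table()
    print("partition table re-read requested")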
Jan 26 18:16:49.945697 kernel: BTRFS info (device vda6): first mount of filesystem 74befb1c-e259-44ed-b7ad-b49e24b3bbbe Jan 26 18:16:49.945723 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 26 18:16:49.945737 kernel: BTRFS info (device vda6): turning on async discard Jan 26 18:16:49.945749 kernel: BTRFS info (device vda6): enabling free space tree Jan 26 18:16:49.941613 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 26 18:16:49.953074 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 26 18:16:50.201866 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 26 18:16:50.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:50.206786 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 26 18:16:50.212076 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 26 18:16:50.237369 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 26 18:16:50.243576 kernel: BTRFS info (device vda6): last unmount of filesystem 74befb1c-e259-44ed-b7ad-b49e24b3bbbe Jan 26 18:16:50.260315 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 26 18:16:50.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:50.276188 ignition[1022]: INFO : Ignition 2.24.0 Jan 26 18:16:50.276188 ignition[1022]: INFO : Stage: mount Jan 26 18:16:50.280784 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 26 18:16:50.280784 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 26 18:16:50.280784 ignition[1022]: INFO : mount: mount passed Jan 26 18:16:50.280784 ignition[1022]: INFO : Ignition finished successfully Jan 26 18:16:50.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:50.281496 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 26 18:16:50.286012 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 26 18:16:50.313902 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 26 18:16:50.346813 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1033) Jan 26 18:16:50.346842 kernel: BTRFS info (device vda6): first mount of filesystem 74befb1c-e259-44ed-b7ad-b49e24b3bbbe Jan 26 18:16:50.346856 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 26 18:16:50.358073 kernel: BTRFS info (device vda6): turning on async discard Jan 26 18:16:50.358100 kernel: BTRFS info (device vda6): enabling free space tree Jan 26 18:16:50.360231 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
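Earlier, the fetch-offline stage logged a SHA512 for the config it pulled over the QEMU firmware channel before acting on it. Assuming that digest is taken over the raw config bytes as served, reproducing it for a config you intend to hand to a VM is a one-liner, which makes it easy to match a served config against a boot log like this one. A minimal sketch (config.ign is a stand-in path):

#!/usr/bin/env python3
# Compute the SHA512 digest of a config file so it can be compared with the
# "parsing config with SHA512: ..." line Ignition logs at boot.
import hashlib
import sys

def config_sha512(path):
    """Return the hex SHA512 of the raw config bytes."""
    with open(path, "rb") as fh:
        return hashlib.sha512(fh.read()).hexdigest()

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "config.ign"
    print(f"parsing config with SHA512: {config_sha512(path)}")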
Jan 26 18:16:50.410309 ignition[1050]: INFO : Ignition 2.24.0 Jan 26 18:16:50.410309 ignition[1050]: INFO : Stage: files Jan 26 18:16:50.414775 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 26 18:16:50.414775 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 26 18:16:50.414775 ignition[1050]: DEBUG : files: compiled without relabeling support, skipping Jan 26 18:16:50.424551 ignition[1050]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 26 18:16:50.424551 ignition[1050]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 26 18:16:50.434360 ignition[1050]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 26 18:16:50.438333 ignition[1050]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 26 18:16:50.438333 ignition[1050]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 26 18:16:50.435194 unknown[1050]: wrote ssh authorized keys file for user: core Jan 26 18:16:50.449035 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 26 18:16:50.449035 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 26 18:16:50.501833 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 26 18:16:50.630883 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 26 18:16:50.630883 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 26 18:16:50.643282 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 26 18:16:50.643282 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 26 18:16:50.643282 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 26 18:16:50.643282 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 26 18:16:50.643282 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 26 18:16:50.643282 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 26 18:16:50.643282 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 26 18:16:50.643282 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 26 18:16:50.643282 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 26 18:16:50.643282 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 26 18:16:50.643282 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 26 18:16:50.643282 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 26 18:16:50.643282 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 26 18:16:50.988829 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 26 18:16:51.892151 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 26 18:16:51.892151 ignition[1050]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 26 18:16:51.903776 ignition[1050]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 26 18:16:51.910210 ignition[1050]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 26 18:16:51.910210 ignition[1050]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 26 18:16:51.910210 ignition[1050]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 26 18:16:51.910210 ignition[1050]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 26 18:16:51.910210 ignition[1050]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 26 18:16:51.910210 ignition[1050]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 26 18:16:51.910210 ignition[1050]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 26 18:16:51.957600 ignition[1050]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 26 18:16:51.968293 ignition[1050]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 26 18:16:51.973722 ignition[1050]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 26 18:16:51.973722 ignition[1050]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 26 18:16:51.973722 ignition[1050]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 26 18:16:51.973722 ignition[1050]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 26 18:16:51.973722 ignition[1050]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 26 18:16:51.973722 ignition[1050]: INFO : files: files passed Jan 26 18:16:51.973722 ignition[1050]: INFO : Ignition finished successfully Jan 26 18:16:51.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:51.975602 systemd[1]: Finished ignition-files.service - Ignition (files). 
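Everything the files stage did above is driven by the Ignition config fetched from the hypervisor: the ssh key for core, the Helm tarball, the YAML manifests, the kubernetes sysext link, and the prepare-helm.service unit all correspond to config directives. A minimal sketch of how such a config could be generated with the Ignition v3 JSON schema follows; the spec version, file contents, and unit body are illustrative placeholders, not the config this machine actually booted with:

#!/usr/bin/env python3
# Emit a small Ignition v3 config that writes one file and enables one unit,
# in the spirit of the files stage logged above. All contents are examples.
import json
import urllib.parse

UNIT_BODY = """[Unit]
Description=Unpack helm to /opt/bin
[Service]
Type=oneshot
ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.17.0-linux-amd64.tar.gz --strip-components=1 linux-amd64/helm
[Install]
WantedBy=multi-user.target
"""

SCRIPT = "#!/bin/bash\necho hello from install.sh\n"

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [{"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example"]}]
    },
    "storage": {
        "files": [
            {
                "path": "/home/core/install.sh",
                "mode": 0o755,
                # Inline contents travel as a data: URL.
                "contents": {"source": "data:," + urllib.parse.quote(SCRIPT)},
            }
        ]
    },
    "systemd": {
        "units": [{"name": "prepare-helm.service", "enabled": True, "contents": UNIT_BODY}]
    },
}

if __name__ == "__main__":
    print(json.dumps(config, indent=2))

On QEMU this JSON would typically be passed through the firmware config device (the qemu_fw_cfg path the fetch-offline stage loaded), or via -fw_cfg on the command line.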
Jan 26 18:16:51.983073 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 26 18:16:51.984692 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 26 18:16:52.030271 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 26 18:16:52.030498 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 26 18:16:52.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.040968 initrd-setup-root-after-ignition[1081]: grep: /sysroot/oem/oem-release: No such file or directory Jan 26 18:16:52.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.049338 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 26 18:16:52.043433 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 26 18:16:52.061995 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 26 18:16:52.061995 initrd-setup-root-after-ignition[1083]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 26 18:16:52.049946 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 26 18:16:52.071874 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 26 18:16:52.149972 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 26 18:16:52.150153 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 26 18:16:52.157434 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 26 18:16:52.159578 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 26 18:16:52.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.169198 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 26 18:16:52.174017 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 26 18:16:52.220341 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 26 18:16:52.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.229811 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 26 18:16:52.264819 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
Jan 26 18:16:52.265146 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 26 18:16:52.271434 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 26 18:16:52.284599 systemd[1]: Stopped target timers.target - Timer Units. Jan 26 18:16:52.288416 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 26 18:16:52.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.288624 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 26 18:16:52.295944 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 26 18:16:52.302610 systemd[1]: Stopped target basic.target - Basic System. Jan 26 18:16:52.309758 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 26 18:16:52.316864 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 26 18:16:52.319308 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 26 18:16:52.334291 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 26 18:16:52.336778 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 26 18:16:52.351378 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 26 18:16:52.353368 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 26 18:16:52.365813 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 26 18:16:52.366582 systemd[1]: Stopped target swap.target - Swaps. Jan 26 18:16:52.368279 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 26 18:16:52.368414 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 26 18:16:52.384172 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 26 18:16:52.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.390238 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 26 18:16:52.397548 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 26 18:16:52.404553 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 26 18:16:52.406471 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 26 18:16:52.406591 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 26 18:16:52.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.422809 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 26 18:16:52.422980 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 26 18:16:52.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.425482 systemd[1]: Stopped target paths.target - Path Units. Jan 26 18:16:52.434579 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 26 18:16:52.439007 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 26 18:16:52.446240 systemd[1]: Stopped target slices.target - Slice Units. Jan 26 18:16:52.453907 systemd[1]: Stopped target sockets.target - Socket Units. Jan 26 18:16:52.459801 systemd[1]: iscsid.socket: Deactivated successfully. Jan 26 18:16:52.459926 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 26 18:16:52.465996 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 26 18:16:52.466103 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 26 18:16:52.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.471919 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 26 18:16:52.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.472019 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 26 18:16:52.478370 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 26 18:16:52.478534 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 26 18:16:52.481404 systemd[1]: ignition-files.service: Deactivated successfully. Jan 26 18:16:52.481568 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 26 18:16:52.491563 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 26 18:16:52.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.495488 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 26 18:16:52.495626 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 26 18:16:52.530593 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 26 18:16:52.532488 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 26 18:16:52.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.532681 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 26 18:16:52.537782 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 26 18:16:52.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.537914 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 26 18:16:52.562819 ignition[1107]: INFO : Ignition 2.24.0 Jan 26 18:16:52.562819 ignition[1107]: INFO : Stage: umount Jan 26 18:16:52.562819 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 26 18:16:52.562819 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 26 18:16:52.562819 ignition[1107]: INFO : umount: umount passed Jan 26 18:16:52.562819 ignition[1107]: INFO : Ignition finished successfully Jan 26 18:16:52.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.548086 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 26 18:16:52.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.548243 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 26 18:16:52.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.561359 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 26 18:16:52.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.568508 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 26 18:16:52.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.580775 systemd[1]: Stopped target network.target - Network. Jan 26 18:16:52.586344 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 26 18:16:52.586405 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 26 18:16:52.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.587701 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 26 18:16:52.587752 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 26 18:16:52.594606 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 26 18:16:52.594716 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 26 18:16:52.601400 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 26 18:16:52.601486 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 26 18:16:52.608430 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 26 18:16:52.613722 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 26 18:16:52.621948 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 26 18:16:52.622104 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 26 18:16:52.665939 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 26 18:16:52.669255 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 26 18:16:52.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.679099 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 26 18:16:52.682430 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 26 18:16:52.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.691308 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 26 18:16:52.692000 audit: BPF prog-id=9 op=UNLOAD Jan 26 18:16:52.693159 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 26 18:16:52.698784 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 26 18:16:52.698849 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 26 18:16:52.708000 audit: BPF prog-id=6 op=UNLOAD Jan 26 18:16:52.713024 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 26 18:16:52.714727 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 26 18:16:52.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.714832 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 26 18:16:52.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.724614 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 26 18:16:52.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.724722 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 26 18:16:52.726564 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 26 18:16:52.726611 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 26 18:16:52.736300 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 26 18:16:52.781976 kernel: kauditd_printk_skb: 35 callbacks suppressed Jan 26 18:16:52.781999 kernel: audit: type=1131 audit(1769451412.756:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.782013 kernel: audit: type=1131 audit(1769451412.769:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:16:52.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.745839 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 26 18:16:52.745944 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 26 18:16:52.757159 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 26 18:16:52.757245 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 26 18:16:52.796579 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 26 18:16:52.796876 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 26 18:16:52.803270 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 26 18:16:52.803319 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 26 18:16:52.809885 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 26 18:16:52.809923 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 26 18:16:52.852319 kernel: audit: type=1131 audit(1769451412.802:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.852339 kernel: audit: type=1131 audit(1769451412.821:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.852352 kernel: audit: type=1131 audit(1769451412.842:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.815947 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 26 18:16:52.815997 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 26 18:16:52.871431 kernel: audit: type=1131 audit(1769451412.859:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.823212 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 26 18:16:52.823261 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 26 18:16:52.853818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 26 18:16:52.853901 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 26 18:16:52.885263 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 26 18:16:52.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.888957 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 26 18:16:52.905363 kernel: audit: type=1131 audit(1769451412.890:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.889012 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 26 18:16:52.921285 kernel: audit: type=1131 audit(1769451412.910:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.899892 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 26 18:16:52.939011 kernel: audit: type=1131 audit(1769451412.926:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.899943 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 26 18:16:52.911019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 26 18:16:52.911070 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 26 18:16:52.958394 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 26 18:16:52.975070 kernel: audit: type=1130 audit(1769451412.960:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.958605 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 26 18:16:52.996122 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 26 18:16:52.996348 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 26 18:16:52.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:52.999851 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Jan 26 18:16:53.010814 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 26 18:16:53.026100 systemd[1]: Switching root. Jan 26 18:16:53.060302 systemd-journald[320]: Journal stopped Jan 26 18:16:54.609542 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). Jan 26 18:16:54.609606 kernel: SELinux: policy capability network_peer_controls=1 Jan 26 18:16:54.609621 kernel: SELinux: policy capability open_perms=1 Jan 26 18:16:54.609683 kernel: SELinux: policy capability extended_socket_class=1 Jan 26 18:16:54.609704 kernel: SELinux: policy capability always_check_network=0 Jan 26 18:16:54.609715 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 26 18:16:54.609730 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 26 18:16:54.609744 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 26 18:16:54.609755 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 26 18:16:54.609766 kernel: SELinux: policy capability userspace_initial_context=0 Jan 26 18:16:54.609786 systemd[1]: Successfully loaded SELinux policy in 79.845ms. Jan 26 18:16:54.609822 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.064ms. Jan 26 18:16:54.609841 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 26 18:16:54.609854 systemd[1]: Detected virtualization kvm. Jan 26 18:16:54.609866 systemd[1]: Detected architecture x86-64. Jan 26 18:16:54.609878 systemd[1]: Detected first boot. Jan 26 18:16:54.609890 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 26 18:16:54.609902 zram_generator::config[1151]: No configuration found. Jan 26 18:16:54.609918 kernel: Guest personality initialized and is inactive Jan 26 18:16:54.609930 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 26 18:16:54.609941 kernel: Initialized host personality Jan 26 18:16:54.609952 kernel: NET: Registered PF_VSOCK protocol family Jan 26 18:16:54.609963 systemd[1]: Populated /etc with preset unit settings. Jan 26 18:16:54.609976 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 26 18:16:54.609988 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 26 18:16:54.610002 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 26 18:16:54.610017 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 26 18:16:54.610029 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 26 18:16:54.610044 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 26 18:16:54.610056 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 26 18:16:54.610068 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 26 18:16:54.610082 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 26 18:16:54.610094 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 26 18:16:54.610105 systemd[1]: Created slice user.slice - User and Session Slice. Jan 26 18:16:54.610117 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
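After the switch to the real root, the kernel reports the SELinux policy capabilities while systemd loads the policy (about 80 ms here) and relabels /dev, /dev/shm and /run. Whether the loaded policy is enforcing or permissive can be checked at runtime from selinuxfs, which is what getenforce(8) does; a minimal sketch:

#!/usr/bin/env python3
# Report SELinux state the same way getenforce does, by reading selinuxfs.
import os

ENFORCE_PATH = "/sys/fs/selinux/enforce"

def selinux_mode():
    """Return 'Disabled', 'Permissive' or 'Enforcing'."""
    if not os.path.exists(ENFORCE_PATH):
        return "Disabled"
    with open(ENFORCE_PATH) as fh:
        return "Enforcing" if fh.read().strip() == "1" else "Permissive"

if __name__ == "__main__":
    print(selinux_mode())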
Jan 26 18:16:54.610129 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 26 18:16:54.610141 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 26 18:16:54.610153 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 26 18:16:54.610166 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 26 18:16:54.610179 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 26 18:16:54.610190 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 26 18:16:54.610202 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 26 18:16:54.610213 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 26 18:16:54.610225 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 26 18:16:54.610240 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 26 18:16:54.610251 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 26 18:16:54.610263 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 26 18:16:54.610275 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 26 18:16:54.610286 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 26 18:16:54.610298 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 26 18:16:54.610310 systemd[1]: Reached target slices.target - Slice Units. Jan 26 18:16:54.610321 systemd[1]: Reached target swap.target - Swaps. Jan 26 18:16:54.610335 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 26 18:16:54.610346 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 26 18:16:54.610358 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 26 18:16:54.610371 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 26 18:16:54.610383 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 26 18:16:54.610394 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 26 18:16:54.610406 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 26 18:16:54.610419 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 26 18:16:54.610431 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 26 18:16:54.610442 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 26 18:16:54.610497 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 26 18:16:54.610510 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 26 18:16:54.610522 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 26 18:16:54.610535 systemd[1]: Mounting media.mount - External Media Directory... Jan 26 18:16:54.610550 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 26 18:16:54.610561 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 26 18:16:54.610573 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Jan 26 18:16:54.610585 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 26 18:16:54.610597 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 26 18:16:54.610609 systemd[1]: Reached target machines.target - Containers. Jan 26 18:16:54.610623 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 26 18:16:54.610687 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 26 18:16:54.610700 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 26 18:16:54.610712 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 26 18:16:54.610724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 26 18:16:54.610736 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 26 18:16:54.610752 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 26 18:16:54.610766 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 26 18:16:54.610779 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 26 18:16:54.610791 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 26 18:16:54.610803 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 26 18:16:54.610814 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 26 18:16:54.610826 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 26 18:16:54.610837 systemd[1]: Stopped systemd-fsck-usr.service. Jan 26 18:16:54.610853 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 26 18:16:54.610865 kernel: ACPI: bus type drm_connector registered Jan 26 18:16:54.610876 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 26 18:16:54.610887 kernel: fuse: init (API version 7.41) Jan 26 18:16:54.610898 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 26 18:16:54.610912 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 26 18:16:54.610924 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 26 18:16:54.610936 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 26 18:16:54.610967 systemd-journald[1233]: Collecting audit messages is enabled. Jan 26 18:16:54.610992 systemd-journald[1233]: Journal started Jan 26 18:16:54.611014 systemd-journald[1233]: Runtime Journal (/run/log/journal/670972c76d3b41518e746166fadcb565) is 6M, max 48.2M, 42.1M free. Jan 26 18:16:54.280000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 26 18:16:54.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:16:54.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.541000 audit: BPF prog-id=14 op=UNLOAD Jan 26 18:16:54.542000 audit: BPF prog-id=13 op=UNLOAD Jan 26 18:16:54.544000 audit: BPF prog-id=15 op=LOAD Jan 26 18:16:54.545000 audit: BPF prog-id=16 op=LOAD Jan 26 18:16:54.547000 audit: BPF prog-id=17 op=LOAD Jan 26 18:16:54.607000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 26 18:16:54.607000 audit[1233]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc5273c310 a2=4000 a3=0 items=0 ppid=1 pid=1233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:16:54.607000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 26 18:16:54.027401 systemd[1]: Queued start job for default target multi-user.target. Jan 26 18:16:54.048418 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 26 18:16:54.049218 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 26 18:16:54.049835 systemd[1]: systemd-journald.service: Consumed 1.028s CPU time. Jan 26 18:16:54.616967 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 26 18:16:54.626727 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 26 18:16:54.633208 systemd[1]: Started systemd-journald.service - Journal Service. Jan 26 18:16:54.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.636835 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 26 18:16:54.640552 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 26 18:16:54.645295 systemd[1]: Mounted media.mount - External Media Directory. Jan 26 18:16:54.648623 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 26 18:16:54.652292 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 26 18:16:54.656000 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 26 18:16:54.659719 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 26 18:16:54.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.664164 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 26 18:16:54.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.668926 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 26 18:16:54.669238 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
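journald sized its runtime journal above (6M in use against a 48.2M cap on /run). After boot the same numbers can be checked with `journalctl --disk-usage`, or approximated by summing the journal files directly; a minimal sketch of the latter, pointed at the /run/log/journal tree that holds the machine-ID directory logged above:

#!/usr/bin/env python3
# Sum the on-disk size of journald's runtime journal files, roughly what
# `journalctl --disk-usage` reports for /run/log/journal.
import os

def journal_usage(root="/run/log/journal"):
    """Return total bytes used by journal files under root."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".journal") or name.endswith(".journal~"):
                total += os.path.getsize(os.path.join(dirpath, name))
    return total

if __name__ == "__main__":
    used = journal_usage()
    print(f"runtime journal uses {used / (1024 * 1024):.1f} MiB")

Note that `journalctl --disk-usage` also counts the persistent journal under /var/log/journal, so the two figures only match while the system is still logging to /run alone.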
Jan 26 18:16:54.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.673694 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 26 18:16:54.673963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 26 18:16:54.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.678167 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 26 18:16:54.678440 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 26 18:16:54.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.682375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 26 18:16:54.682767 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 26 18:16:54.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.687142 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 26 18:16:54.687484 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 26 18:16:54.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.691714 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 26 18:16:54.692018 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 26 18:16:54.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:16:54.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.696186 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 26 18:16:54.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.700922 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 26 18:16:54.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.706401 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 26 18:16:54.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.711212 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 26 18:16:54.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.730402 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 26 18:16:54.734901 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 26 18:16:54.740525 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 26 18:16:54.746344 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 26 18:16:54.750020 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 26 18:16:54.750075 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 26 18:16:54.754532 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 26 18:16:54.758856 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 26 18:16:54.759010 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 26 18:16:54.762417 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 26 18:16:54.768972 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 26 18:16:54.772954 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 26 18:16:54.774329 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 26 18:16:54.777990 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 26 18:16:54.780508 systemd-journald[1233]: Time spent on flushing to /var/log/journal/670972c76d3b41518e746166fadcb565 is 14.289ms for 1101 entries. Jan 26 18:16:54.780508 systemd-journald[1233]: System Journal (/var/log/journal/670972c76d3b41518e746166fadcb565) is 8M, max 163.5M, 155.5M free. Jan 26 18:16:54.802233 systemd-journald[1233]: Received client request to flush runtime journal. Jan 26 18:16:54.781807 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 26 18:16:54.795906 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 26 18:16:54.801123 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 26 18:16:54.807954 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 26 18:16:54.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.812235 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 26 18:16:54.816429 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 26 18:16:54.821318 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 26 18:16:54.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.831724 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 26 18:16:54.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.836236 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 26 18:16:54.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.845904 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 26 18:16:54.848845 kernel: loop1: detected capacity change from 0 to 224512 Jan 26 18:16:54.855846 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 26 18:16:54.876239 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 26 18:16:54.885770 kernel: loop2: detected capacity change from 0 to 50784 Jan 26 18:16:54.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.887000 audit: BPF prog-id=18 op=LOAD Jan 26 18:16:54.888000 audit: BPF prog-id=19 op=LOAD Jan 26 18:16:54.888000 audit: BPF prog-id=20 op=LOAD Jan 26 18:16:54.890326 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 26 18:16:54.896000 audit: BPF prog-id=21 op=LOAD Jan 26 18:16:54.898967 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 26 18:16:54.907313 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 26 18:16:54.912890 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 26 18:16:54.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:54.923000 audit: BPF prog-id=22 op=LOAD Jan 26 18:16:54.924000 audit: BPF prog-id=23 op=LOAD Jan 26 18:16:54.924000 audit: BPF prog-id=24 op=LOAD Jan 26 18:16:54.926234 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 26 18:16:54.931000 audit: BPF prog-id=25 op=LOAD Jan 26 18:16:54.931000 audit: BPF prog-id=26 op=LOAD Jan 26 18:16:54.931000 audit: BPF prog-id=27 op=LOAD Jan 26 18:16:54.935204 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 26 18:16:54.945718 kernel: loop3: detected capacity change from 0 to 111560 Jan 26 18:16:54.957723 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 26 18:16:54.957767 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 26 18:16:54.966834 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 26 18:16:54.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:55.002748 kernel: loop4: detected capacity change from 0 to 224512 Jan 26 18:16:55.003172 systemd-nsresourced[1293]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 26 18:16:55.005521 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 26 18:16:55.012327 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 26 18:16:55.019687 kernel: loop5: detected capacity change from 0 to 50784 Jan 26 18:16:55.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:55.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:55.036739 kernel: loop6: detected capacity change from 0 to 111560 Jan 26 18:16:55.049954 (sd-merge)[1298]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 26 18:16:55.053042 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 26 18:16:55.054743 (sd-merge)[1298]: Merged extensions into '/usr'. Jan 26 18:16:55.059868 systemd[1]: Reload requested from client PID 1271 ('systemd-sysext') (unit systemd-sysext.service)... Jan 26 18:16:55.059887 systemd[1]: Reloading... Jan 26 18:16:55.099821 systemd-oomd[1288]: No swap; memory pressure usage will be degraded Jan 26 18:16:55.131937 systemd-resolved[1289]: Positive Trust Anchors: Jan 26 18:16:55.131984 systemd-resolved[1289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 26 18:16:55.131989 systemd-resolved[1289]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 26 18:16:55.132016 systemd-resolved[1289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 26 18:16:55.140539 systemd-resolved[1289]: Defaulting to hostname 'linux'. Jan 26 18:16:55.141735 zram_generator::config[1342]: No configuration found. Jan 26 18:16:55.368706 systemd[1]: Reloading finished in 308 ms. Jan 26 18:16:55.406278 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 26 18:16:55.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:55.411087 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 26 18:16:55.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:55.415551 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 26 18:16:55.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:55.420450 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 26 18:16:55.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:55.431720 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 26 18:16:55.451588 systemd[1]: Starting ensure-sysext.service... Jan 26 18:16:55.456060 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 26 18:16:55.459000 audit: BPF prog-id=8 op=UNLOAD Jan 26 18:16:55.460000 audit: BPF prog-id=7 op=UNLOAD Jan 26 18:16:55.460000 audit: BPF prog-id=28 op=LOAD Jan 26 18:16:55.460000 audit: BPF prog-id=29 op=LOAD Jan 26 18:16:55.466000 audit: BPF prog-id=30 op=LOAD Jan 26 18:16:55.466000 audit: BPF prog-id=22 op=UNLOAD Jan 26 18:16:55.466000 audit: BPF prog-id=31 op=LOAD Jan 26 18:16:55.466000 audit: BPF prog-id=32 op=LOAD Jan 26 18:16:55.466000 audit: BPF prog-id=23 op=UNLOAD Jan 26 18:16:55.466000 audit: BPF prog-id=24 op=UNLOAD Jan 26 18:16:55.467000 audit: BPF prog-id=33 op=LOAD Jan 26 18:16:55.467000 audit: BPF prog-id=18 op=UNLOAD Jan 26 18:16:55.467000 audit: BPF prog-id=34 op=LOAD Jan 26 18:16:55.467000 audit: BPF prog-id=35 op=LOAD Jan 26 18:16:55.467000 audit: BPF prog-id=19 op=UNLOAD Jan 26 18:16:55.467000 audit: BPF prog-id=20 op=UNLOAD Jan 26 18:16:55.462306 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 26 18:16:55.469000 audit: BPF prog-id=36 op=LOAD Jan 26 18:16:55.469000 audit: BPF prog-id=25 op=UNLOAD Jan 26 18:16:55.469000 audit: BPF prog-id=37 op=LOAD Jan 26 18:16:55.469000 audit: BPF prog-id=38 op=LOAD Jan 26 18:16:55.469000 audit: BPF prog-id=26 op=UNLOAD Jan 26 18:16:55.469000 audit: BPF prog-id=27 op=UNLOAD Jan 26 18:16:55.470000 audit: BPF prog-id=39 op=LOAD Jan 26 18:16:55.470000 audit: BPF prog-id=21 op=UNLOAD Jan 26 18:16:55.473000 audit: BPF prog-id=40 op=LOAD Jan 26 18:16:55.473000 audit: BPF prog-id=15 op=UNLOAD Jan 26 18:16:55.473000 audit: BPF prog-id=41 op=LOAD Jan 26 18:16:55.473000 audit: BPF prog-id=42 op=LOAD Jan 26 18:16:55.473000 audit: BPF prog-id=16 op=UNLOAD Jan 26 18:16:55.473000 audit: BPF prog-id=17 op=UNLOAD Jan 26 18:16:55.482851 systemd[1]: Reload requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... Jan 26 18:16:55.482898 systemd[1]: Reloading... Jan 26 18:16:55.483576 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 26 18:16:55.483790 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 26 18:16:55.484092 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 26 18:16:55.485842 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Jan 26 18:16:55.485949 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Jan 26 18:16:55.493955 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Jan 26 18:16:55.494062 systemd-tmpfiles[1380]: Skipping /boot Jan 26 18:16:55.496414 systemd-udevd[1381]: Using default interface naming scheme 'v257'. Jan 26 18:16:55.511051 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Jan 26 18:16:55.511285 systemd-tmpfiles[1380]: Skipping /boot Jan 26 18:16:55.548722 zram_generator::config[1410]: No configuration found. Jan 26 18:16:55.675698 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 26 18:16:55.682686 kernel: ACPI: button: Power Button [PWRF] Jan 26 18:16:55.690724 kernel: mousedev: PS/2 mouse device common for all mice Jan 26 18:16:55.736739 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 26 18:16:55.737084 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 26 18:16:55.831809 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 26 18:16:55.832023 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 26 18:16:55.838108 systemd[1]: Reloading finished in 354 ms. Jan 26 18:16:55.851751 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 26 18:16:55.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:16:55.858000 audit: BPF prog-id=43 op=LOAD Jan 26 18:16:55.859000 audit: BPF prog-id=36 op=UNLOAD Jan 26 18:16:55.859000 audit: BPF prog-id=44 op=LOAD Jan 26 18:16:55.859000 audit: BPF prog-id=45 op=LOAD Jan 26 18:16:55.859000 audit: BPF prog-id=37 op=UNLOAD Jan 26 18:16:55.859000 audit: BPF prog-id=38 op=UNLOAD Jan 26 18:16:55.861000 audit: BPF prog-id=46 op=LOAD Jan 26 18:16:55.861000 audit: BPF prog-id=40 op=UNLOAD Jan 26 18:16:55.861000 audit: BPF prog-id=47 op=LOAD Jan 26 18:16:55.861000 audit: BPF prog-id=48 op=LOAD Jan 26 18:16:55.861000 audit: BPF prog-id=41 op=UNLOAD Jan 26 18:16:55.861000 audit: BPF prog-id=42 op=UNLOAD Jan 26 18:16:55.862000 audit: BPF prog-id=49 op=LOAD Jan 26 18:16:55.862000 audit: BPF prog-id=39 op=UNLOAD Jan 26 18:16:55.930000 audit: BPF prog-id=50 op=LOAD Jan 26 18:16:55.930000 audit: BPF prog-id=30 op=UNLOAD Jan 26 18:16:55.930000 audit: BPF prog-id=51 op=LOAD Jan 26 18:16:55.930000 audit: BPF prog-id=52 op=LOAD Jan 26 18:16:55.930000 audit: BPF prog-id=31 op=UNLOAD Jan 26 18:16:55.930000 audit: BPF prog-id=32 op=UNLOAD Jan 26 18:16:55.931000 audit: BPF prog-id=53 op=LOAD Jan 26 18:16:55.931000 audit: BPF prog-id=54 op=LOAD Jan 26 18:16:55.931000 audit: BPF prog-id=28 op=UNLOAD Jan 26 18:16:55.931000 audit: BPF prog-id=29 op=UNLOAD Jan 26 18:16:55.935000 audit: BPF prog-id=55 op=LOAD Jan 26 18:16:55.935000 audit: BPF prog-id=33 op=UNLOAD Jan 26 18:16:55.935000 audit: BPF prog-id=56 op=LOAD Jan 26 18:16:55.936000 audit: BPF prog-id=57 op=LOAD Jan 26 18:16:55.936000 audit: BPF prog-id=34 op=UNLOAD Jan 26 18:16:55.936000 audit: BPF prog-id=35 op=UNLOAD Jan 26 18:16:55.960031 kernel: kvm_amd: TSC scaling supported Jan 26 18:16:55.960091 kernel: kvm_amd: Nested Virtualization enabled Jan 26 18:16:55.960118 kernel: kvm_amd: Nested Paging enabled Jan 26 18:16:55.962898 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 26 18:16:55.962875 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 26 18:16:55.966746 kernel: kvm_amd: PMU virtualization is disabled Jan 26 18:16:55.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:56.053179 systemd[1]: Finished ensure-sysext.service. Jan 26 18:16:56.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:56.062740 kernel: EDAC MC: Ver: 3.0.0 Jan 26 18:16:56.082080 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 26 18:16:56.084043 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 26 18:16:56.089961 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 26 18:16:56.094321 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 26 18:16:56.107162 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 26 18:16:56.114450 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 26 18:16:56.120336 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 26 18:16:56.125587 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 26 18:16:56.129140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 26 18:16:56.129292 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 26 18:16:56.130913 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 26 18:16:56.140364 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 26 18:16:56.144288 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 26 18:16:56.147773 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 26 18:16:56.153000 audit: BPF prog-id=58 op=LOAD Jan 26 18:16:56.156253 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 26 18:16:56.159000 audit: BPF prog-id=59 op=LOAD Jan 26 18:16:56.166049 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 26 18:16:56.172162 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 26 18:16:56.181700 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 26 18:16:56.186766 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 26 18:16:56.188418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 26 18:16:56.188819 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 26 18:16:56.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:56.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:56.197000 audit[1523]: SYSTEM_BOOT pid=1523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 26 18:16:56.193868 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 26 18:16:56.194093 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 26 18:16:56.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:56.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:56.198861 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 26 18:16:56.199185 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 26 18:16:56.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:56.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:16:56.204956 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 26 18:16:56.210380 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 26 18:16:56.213256 augenrules[1528]: No rules Jan 26 18:16:56.212000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 26 18:16:56.212000 audit[1528]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc4f5d4d30 a2=420 a3=0 items=0 ppid=1495 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:16:56.212000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 26 18:16:56.217609 systemd[1]: audit-rules.service: Deactivated successfully. Jan 26 18:16:56.218921 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 26 18:16:56.226360 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 26 18:16:56.232374 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 26 18:16:56.251529 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 26 18:16:56.252085 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 26 18:16:56.255004 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 26 18:16:56.275451 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 26 18:16:56.278279 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 26 18:16:56.338986 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 26 18:16:56.348594 systemd-networkd[1518]: lo: Link UP Jan 26 18:16:56.348935 systemd-networkd[1518]: lo: Gained carrier Jan 26 18:16:56.351111 systemd-networkd[1518]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 26 18:16:56.351530 systemd-networkd[1518]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 26 18:16:56.354006 systemd-networkd[1518]: eth0: Link UP Jan 26 18:16:56.355310 systemd-networkd[1518]: eth0: Gained carrier Jan 26 18:16:56.355328 systemd-networkd[1518]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 26 18:16:56.387967 systemd-networkd[1518]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 26 18:16:56.390056 systemd-timesyncd[1521]: Network configuration changed, trying to establish connection. Jan 26 18:16:57.164272 systemd-resolved[1289]: Clock change detected. Flushing caches. Jan 26 18:16:57.164327 systemd-timesyncd[1521]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 26 18:16:57.164380 systemd-timesyncd[1521]: Initial clock synchronization to Mon 2026-01-26 18:16:57.164206 UTC. Jan 26 18:16:57.300584 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 26 18:16:57.306951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 26 18:16:57.315229 systemd[1]: Reached target network.target - Network. Jan 26 18:16:57.319926 systemd[1]: Reached target time-set.target - System Time Set. Jan 26 18:16:57.327043 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 26 18:16:57.335201 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 26 18:16:57.365706 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 26 18:16:57.510543 ldconfig[1507]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 26 18:16:57.518192 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 26 18:16:57.526014 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 26 18:16:57.558692 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 26 18:16:57.563586 systemd[1]: Reached target sysinit.target - System Initialization. Jan 26 18:16:57.568229 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 26 18:16:57.573200 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 26 18:16:57.578448 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 26 18:16:57.583327 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 26 18:16:57.587880 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 26 18:16:57.592942 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 26 18:16:57.598033 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 26 18:16:57.602346 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 26 18:16:57.607517 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 26 18:16:57.607601 systemd[1]: Reached target paths.target - Path Units. Jan 26 18:16:57.610993 systemd[1]: Reached target timers.target - Timer Units. Jan 26 18:16:57.615226 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 26 18:16:57.621708 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jan 26 18:16:57.627996 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 26 18:16:57.632720 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 26 18:16:57.637234 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 26 18:16:57.643704 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 26 18:16:57.648398 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 26 18:16:57.654128 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 26 18:16:57.659398 systemd[1]: Reached target sockets.target - Socket Units. Jan 26 18:16:57.663372 systemd[1]: Reached target basic.target - Basic System. Jan 26 18:16:57.667210 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 26 18:16:57.667289 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 26 18:16:57.668993 systemd[1]: Starting containerd.service - containerd container runtime... Jan 26 18:16:57.674157 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 26 18:16:57.684440 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 26 18:16:57.690799 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 26 18:16:57.701931 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 26 18:16:57.706026 jq[1563]: false Jan 26 18:16:57.706007 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 26 18:16:57.707446 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 26 18:16:57.712925 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 26 18:16:57.718191 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 26 18:16:57.725286 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 26 18:16:57.734964 extend-filesystems[1564]: Found /dev/vda6 Jan 26 18:16:57.733058 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 26 18:16:57.744468 oslogin_cache_refresh[1565]: Refreshing passwd entry cache Jan 26 18:16:57.751470 extend-filesystems[1564]: Found /dev/vda9 Jan 26 18:16:57.760014 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing passwd entry cache Jan 26 18:16:57.744912 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 26 18:16:57.760290 extend-filesystems[1564]: Checking size of /dev/vda9 Jan 26 18:16:57.752303 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 26 18:16:57.764590 oslogin_cache_refresh[1565]: Failure getting users, quitting Jan 26 18:16:57.770143 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting users, quitting Jan 26 18:16:57.770143 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 26 18:16:57.770143 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing group entry cache Jan 26 18:16:57.753005 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jan 26 18:16:57.764611 oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 26 18:16:57.754026 systemd[1]: Starting update-engine.service - Update Engine... Jan 26 18:16:57.764751 oslogin_cache_refresh[1565]: Refreshing group entry cache Jan 26 18:16:57.765368 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 26 18:16:57.776159 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 26 18:16:57.783925 extend-filesystems[1564]: Resized partition /dev/vda9 Jan 26 18:16:57.782021 oslogin_cache_refresh[1565]: Failure getting groups, quitting Jan 26 18:16:57.781945 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 26 18:16:57.800805 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting groups, quitting Jan 26 18:16:57.800805 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 26 18:16:57.782037 oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 26 18:16:57.782269 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 26 18:16:57.801044 jq[1584]: true Jan 26 18:16:57.782616 systemd[1]: motdgen.service: Deactivated successfully. Jan 26 18:16:57.783011 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 26 18:16:57.786089 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 26 18:16:57.786373 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 26 18:16:57.792462 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 26 18:16:57.793015 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 26 18:16:57.813333 extend-filesystems[1598]: resize2fs 1.47.3 (8-Jul-2025) Jan 26 18:16:57.835988 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 26 18:16:57.836048 update_engine[1581]: I20260126 18:16:57.817458 1581 main.cc:92] Flatcar Update Engine starting Jan 26 18:16:57.843570 jq[1596]: true Jan 26 18:16:57.865507 tar[1589]: linux-amd64/LICENSE Jan 26 18:16:57.865507 tar[1589]: linux-amd64/helm Jan 26 18:16:57.905946 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 26 18:16:57.899511 dbus-daemon[1561]: [system] SELinux support is enabled Jan 26 18:16:57.899898 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 26 18:16:57.913999 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 26 18:16:57.937061 update_engine[1581]: I20260126 18:16:57.920154 1581 update_check_scheduler.cc:74] Next update check in 2m47s Jan 26 18:16:57.914027 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 26 18:16:57.920510 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 26 18:16:57.920535 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 26 18:16:57.928800 systemd[1]: Started update-engine.service - Update Engine. 
Jan 26 18:16:57.937808 systemd-logind[1579]: Watching system buttons on /dev/input/event2 (Power Button) Jan 26 18:16:57.938170 systemd-logind[1579]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 26 18:16:57.942427 sshd_keygen[1594]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 26 18:16:57.942535 systemd-logind[1579]: New seat seat0. Jan 26 18:16:57.944499 extend-filesystems[1598]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 26 18:16:57.944499 extend-filesystems[1598]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 26 18:16:57.944499 extend-filesystems[1598]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 26 18:16:57.970729 extend-filesystems[1564]: Resized filesystem in /dev/vda9 Jan 26 18:16:57.971755 bash[1628]: Updated "/home/core/.ssh/authorized_keys" Jan 26 18:16:57.949204 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 26 18:16:57.958219 systemd[1]: Started systemd-logind.service - User Login Management. Jan 26 18:16:57.976587 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 26 18:16:57.978120 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 26 18:16:57.987604 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 26 18:16:57.994122 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 26 18:16:58.007498 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 26 18:16:58.010873 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 26 18:16:58.031455 systemd[1]: issuegen.service: Deactivated successfully. Jan 26 18:16:58.032942 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 26 18:16:58.039268 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 26 18:16:58.040959 locksmithd[1629]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 26 18:16:58.071554 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 26 18:16:58.077700 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 26 18:16:58.084327 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 26 18:16:58.089185 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 26 18:16:58.093680 containerd[1597]: time="2026-01-26T18:16:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 26 18:16:58.094531 containerd[1597]: time="2026-01-26T18:16:58.094457095Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 26 18:16:58.104050 containerd[1597]: time="2026-01-26T18:16:58.103996571Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.126µs" Jan 26 18:16:58.104050 containerd[1597]: time="2026-01-26T18:16:58.104019043Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 26 18:16:58.104050 containerd[1597]: time="2026-01-26T18:16:58.104050993Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 26 18:16:58.104050 containerd[1597]: time="2026-01-26T18:16:58.104062174Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 26 18:16:58.104308 containerd[1597]: time="2026-01-26T18:16:58.104202846Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 26 18:16:58.104308 containerd[1597]: time="2026-01-26T18:16:58.104251487Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 26 18:16:58.104541 containerd[1597]: time="2026-01-26T18:16:58.104357264Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 26 18:16:58.104541 containerd[1597]: time="2026-01-26T18:16:58.104390266Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 26 18:16:58.104889 containerd[1597]: time="2026-01-26T18:16:58.104739603Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 26 18:16:58.104889 containerd[1597]: time="2026-01-26T18:16:58.104801399Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 26 18:16:58.104889 containerd[1597]: time="2026-01-26T18:16:58.104876028Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 26 18:16:58.104889 containerd[1597]: time="2026-01-26T18:16:58.104887700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 26 18:16:58.105230 containerd[1597]: time="2026-01-26T18:16:58.105166841Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 26 18:16:58.105230 containerd[1597]: time="2026-01-26T18:16:58.105215421Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 26 18:16:58.105918 containerd[1597]: time="2026-01-26T18:16:58.105416787Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 
26 18:16:58.105918 containerd[1597]: time="2026-01-26T18:16:58.105721697Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 26 18:16:58.105918 containerd[1597]: time="2026-01-26T18:16:58.105756612Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 26 18:16:58.105918 containerd[1597]: time="2026-01-26T18:16:58.105767131Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 26 18:16:58.105918 containerd[1597]: time="2026-01-26T18:16:58.105802528Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 26 18:16:58.106244 containerd[1597]: time="2026-01-26T18:16:58.106185923Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 26 18:16:58.106321 containerd[1597]: time="2026-01-26T18:16:58.106283525Z" level=info msg="metadata content store policy set" policy=shared Jan 26 18:16:58.112483 containerd[1597]: time="2026-01-26T18:16:58.112357853Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 26 18:16:58.112483 containerd[1597]: time="2026-01-26T18:16:58.112474912Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 26 18:16:58.112759 containerd[1597]: time="2026-01-26T18:16:58.112577694Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 26 18:16:58.112759 containerd[1597]: time="2026-01-26T18:16:58.112622988Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 26 18:16:58.112759 containerd[1597]: time="2026-01-26T18:16:58.112644468Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 26 18:16:58.112759 containerd[1597]: time="2026-01-26T18:16:58.112699200Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 26 18:16:58.112759 containerd[1597]: time="2026-01-26T18:16:58.112711593Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 26 18:16:58.112759 containerd[1597]: time="2026-01-26T18:16:58.112723335Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 26 18:16:58.112759 containerd[1597]: time="2026-01-26T18:16:58.112749193Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 26 18:16:58.112759 containerd[1597]: time="2026-01-26T18:16:58.112761145Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 26 18:16:58.113016 containerd[1597]: time="2026-01-26T18:16:58.112772016Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 26 18:16:58.113016 containerd[1597]: time="2026-01-26T18:16:58.112784219Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 26 18:16:58.113016 containerd[1597]: time="2026-01-26T18:16:58.112793907Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 26 18:16:58.113016 containerd[1597]: time="2026-01-26T18:16:58.112806340Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 26 18:16:58.113016 containerd[1597]: time="2026-01-26T18:16:58.112996916Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 26 18:16:58.113110 containerd[1597]: time="2026-01-26T18:16:58.113018597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 26 18:16:58.113110 containerd[1597]: time="2026-01-26T18:16:58.113032753Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 26 18:16:58.113110 containerd[1597]: time="2026-01-26T18:16:58.113044194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 26 18:16:58.113110 containerd[1597]: time="2026-01-26T18:16:58.113055695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 26 18:16:58.113110 containerd[1597]: time="2026-01-26T18:16:58.113065244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 26 18:16:58.113110 containerd[1597]: time="2026-01-26T18:16:58.113075993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 26 18:16:58.113110 containerd[1597]: time="2026-01-26T18:16:58.113096832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 26 18:16:58.113110 containerd[1597]: time="2026-01-26T18:16:58.113107462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 26 18:16:58.113263 containerd[1597]: time="2026-01-26T18:16:58.113117641Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 26 18:16:58.113263 containerd[1597]: time="2026-01-26T18:16:58.113127028Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 26 18:16:58.113263 containerd[1597]: time="2026-01-26T18:16:58.113153258Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 26 18:16:58.113263 containerd[1597]: time="2026-01-26T18:16:58.113203161Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 26 18:16:58.113263 containerd[1597]: time="2026-01-26T18:16:58.113216475Z" level=info msg="Start snapshots syncer" Jan 26 18:16:58.113357 containerd[1597]: time="2026-01-26T18:16:58.113278491Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 26 18:16:58.113558 containerd[1597]: time="2026-01-26T18:16:58.113518509Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 26 18:16:58.113785 containerd[1597]: time="2026-01-26T18:16:58.113702884Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 26 18:16:58.113809 containerd[1597]: time="2026-01-26T18:16:58.113784165Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 26 18:16:58.114077 containerd[1597]: time="2026-01-26T18:16:58.113967367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 26 18:16:58.114077 containerd[1597]: time="2026-01-26T18:16:58.114021158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 26 18:16:58.114077 containerd[1597]: time="2026-01-26T18:16:58.114033060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 26 18:16:58.114077 containerd[1597]: time="2026-01-26T18:16:58.114042657Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 26 18:16:58.114077 containerd[1597]: time="2026-01-26T18:16:58.114061263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 26 18:16:58.114077 containerd[1597]: time="2026-01-26T18:16:58.114071551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 26 18:16:58.114077 containerd[1597]: time="2026-01-26T18:16:58.114081249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 26 18:16:58.114209 containerd[1597]: time="2026-01-26T18:16:58.114090567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 26 
18:16:58.114209 containerd[1597]: time="2026-01-26T18:16:58.114101698Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 26 18:16:58.114209 containerd[1597]: time="2026-01-26T18:16:58.114162612Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 26 18:16:58.114209 containerd[1597]: time="2026-01-26T18:16:58.114176327Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 26 18:16:58.114209 containerd[1597]: time="2026-01-26T18:16:58.114185194Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 26 18:16:58.114209 containerd[1597]: time="2026-01-26T18:16:58.114194952Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 26 18:16:58.114209 containerd[1597]: time="2026-01-26T18:16:58.114202706Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 26 18:16:58.114327 containerd[1597]: time="2026-01-26T18:16:58.114211864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 26 18:16:58.114327 containerd[1597]: time="2026-01-26T18:16:58.114225539Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 26 18:16:58.114327 containerd[1597]: time="2026-01-26T18:16:58.114243713Z" level=info msg="runtime interface created" Jan 26 18:16:58.114327 containerd[1597]: time="2026-01-26T18:16:58.114249273Z" level=info msg="created NRI interface" Jan 26 18:16:58.114327 containerd[1597]: time="2026-01-26T18:16:58.114257549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 26 18:16:58.114327 containerd[1597]: time="2026-01-26T18:16:58.114268840Z" level=info msg="Connect containerd service" Jan 26 18:16:58.114327 containerd[1597]: time="2026-01-26T18:16:58.114287354Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 26 18:16:58.115203 containerd[1597]: time="2026-01-26T18:16:58.115094030Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 26 18:16:58.208709 containerd[1597]: time="2026-01-26T18:16:58.208218199Z" level=info msg="Start subscribing containerd event" Jan 26 18:16:58.208709 containerd[1597]: time="2026-01-26T18:16:58.208327984Z" level=info msg="Start recovering state" Jan 26 18:16:58.208709 containerd[1597]: time="2026-01-26T18:16:58.208433872Z" level=info msg="Start event monitor" Jan 26 18:16:58.208709 containerd[1597]: time="2026-01-26T18:16:58.208447197Z" level=info msg="Start cni network conf syncer for default" Jan 26 18:16:58.208709 containerd[1597]: time="2026-01-26T18:16:58.208454781Z" level=info msg="Start streaming server" Jan 26 18:16:58.208709 containerd[1597]: time="2026-01-26T18:16:58.208462866Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 26 18:16:58.208709 containerd[1597]: time="2026-01-26T18:16:58.208470019Z" level=info msg="runtime interface starting up..." 
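
The "failed to load cni during init" error above is expected on a first boot: per the cni section of the config dump earlier (confDir /etc/cni/net.d, binDirs /opt/cni/bin), the CRI plugin looks for a network config that no pod-network add-on has installed yet. As a purely illustrative sketch, assuming the standard bridge and host-local CNI plugins were present under /opt/cni/bin and using a placeholder 10.88.0.0/16 subnet, a minimal config that would satisfy the syncer could be written like this (in practice an add-on such as flannel or calico writes its own):

    # Hypothetical example only; a real cluster's network add-on installs this file.
    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        }
      ]
    }
    EOF

The "cni network conf syncer" started above watches that directory, so it should pick the file up without restarting containerd.
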
Jan 26 18:16:58.208709 containerd[1597]: time="2026-01-26T18:16:58.208475620Z" level=info msg="starting plugins..." Jan 26 18:16:58.208709 containerd[1597]: time="2026-01-26T18:16:58.208489385Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 26 18:16:58.209007 containerd[1597]: time="2026-01-26T18:16:58.208761733Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 26 18:16:58.209007 containerd[1597]: time="2026-01-26T18:16:58.208900112Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 26 18:16:58.210025 systemd[1]: Started containerd.service - containerd container runtime. Jan 26 18:16:58.213779 containerd[1597]: time="2026-01-26T18:16:58.212558651Z" level=info msg="containerd successfully booted in 0.119403s" Jan 26 18:16:58.245710 tar[1589]: linux-amd64/README.md Jan 26 18:16:58.264269 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 26 18:16:58.664195 systemd-networkd[1518]: eth0: Gained IPv6LL Jan 26 18:16:58.668227 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 26 18:16:58.674084 systemd[1]: Reached target network-online.target - Network is Online. Jan 26 18:16:58.680472 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 26 18:16:58.688277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:16:58.707751 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 26 18:16:58.746202 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 26 18:16:58.752589 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 26 18:16:58.753161 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 26 18:16:58.759748 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 26 18:16:58.826100 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 26 18:16:58.833470 systemd[1]: Started sshd@0-10.0.0.64:22-10.0.0.1:44184.service - OpenSSH per-connection server daemon (10.0.0.1:44184). Jan 26 18:16:58.946134 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 44184 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:16:58.948956 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:16:58.958968 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 26 18:16:58.966043 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 26 18:16:58.980176 systemd-logind[1579]: New session 1 of user core. Jan 26 18:16:58.999246 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 26 18:16:59.008488 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 26 18:16:59.046483 (systemd)[1703]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:16:59.051402 systemd-logind[1579]: New session 2 of user core. Jan 26 18:16:59.203485 systemd[1703]: Queued start job for default target default.target. Jan 26 18:16:59.217642 systemd[1703]: Created slice app.slice - User Application Slice. Jan 26 18:16:59.217743 systemd[1703]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 26 18:16:59.217760 systemd[1703]: Reached target paths.target - Paths. Jan 26 18:16:59.217939 systemd[1703]: Reached target timers.target - Timers. 
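
With containerd now serving on /run/containerd/containerd.sock, the runtime can be sanity-checked from a shell on the node. Illustrative only, and assuming the ctr client that ships alongside containerd is on the PATH:

    # Confirm the daemon answers on the socket it just announced
    ctr --address /run/containerd/containerd.sock version
    # List the plugins whose loading was logged above
    ctr --address /run/containerd/containerd.sock plugins ls
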
Jan 26 18:16:59.219975 systemd[1703]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 26 18:16:59.221292 systemd[1703]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 26 18:16:59.236324 systemd[1703]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 26 18:16:59.236425 systemd[1703]: Reached target sockets.target - Sockets. Jan 26 18:16:59.239423 systemd[1703]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 26 18:16:59.239725 systemd[1703]: Reached target basic.target - Basic System. Jan 26 18:16:59.239816 systemd[1703]: Reached target default.target - Main User Target. Jan 26 18:16:59.239934 systemd[1703]: Startup finished in 179ms. Jan 26 18:16:59.240065 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 26 18:16:59.254069 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 26 18:16:59.274723 systemd[1]: Started sshd@1-10.0.0.64:22-10.0.0.1:44196.service - OpenSSH per-connection server daemon (10.0.0.1:44196). Jan 26 18:16:59.352037 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 44196 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:16:59.354255 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:16:59.361618 systemd-logind[1579]: New session 3 of user core. Jan 26 18:16:59.370146 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 26 18:16:59.394393 sshd[1721]: Connection closed by 10.0.0.1 port 44196 Jan 26 18:16:59.394947 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Jan 26 18:16:59.415716 systemd[1]: sshd@1-10.0.0.64:22-10.0.0.1:44196.service: Deactivated successfully. Jan 26 18:16:59.418211 systemd[1]: session-3.scope: Deactivated successfully. Jan 26 18:16:59.419727 systemd-logind[1579]: Session 3 logged out. Waiting for processes to exit. Jan 26 18:16:59.423381 systemd[1]: Started sshd@2-10.0.0.64:22-10.0.0.1:44204.service - OpenSSH per-connection server daemon (10.0.0.1:44204). Jan 26 18:16:59.429537 systemd-logind[1579]: Removed session 3. Jan 26 18:16:59.507533 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 44204 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:16:59.509368 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:16:59.516353 systemd-logind[1579]: New session 4 of user core. Jan 26 18:16:59.531147 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 26 18:16:59.554930 sshd[1731]: Connection closed by 10.0.0.1 port 44204 Jan 26 18:16:59.553755 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Jan 26 18:16:59.559723 systemd[1]: sshd@2-10.0.0.64:22-10.0.0.1:44204.service: Deactivated successfully. Jan 26 18:16:59.562452 systemd[1]: session-4.scope: Deactivated successfully. Jan 26 18:16:59.564112 systemd-logind[1579]: Session 4 logged out. Waiting for processes to exit. Jan 26 18:16:59.565773 systemd-logind[1579]: Removed session 4. Jan 26 18:16:59.658804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:16:59.663288 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 26 18:16:59.667266 systemd[1]: Startup finished in 3.366s (kernel) + 7.380s (initrd) + 5.675s (userspace) = 16.423s. 
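
The "Startup finished" summary above is systemd's own accounting (the three phases sum to the ~16.4 s total). It can be reproduced and broken down per unit after boot:

    systemd-analyze          # same kernel + initrd + userspace summary
    systemd-analyze blame    # per-unit startup time, to see what dominates the 5.675s of userspace
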
Jan 26 18:16:59.685268 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 26 18:17:00.219707 kubelet[1741]: E0126 18:17:00.219475 1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 26 18:17:00.223378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 26 18:17:00.223765 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 26 18:17:00.224574 systemd[1]: kubelet.service: Consumed 1.030s CPU time, 265.9M memory peak. Jan 26 18:17:09.568145 systemd[1]: Started sshd@3-10.0.0.64:22-10.0.0.1:43628.service - OpenSSH per-connection server daemon (10.0.0.1:43628). Jan 26 18:17:09.650181 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 43628 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:17:09.652537 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:17:09.659996 systemd-logind[1579]: New session 5 of user core. Jan 26 18:17:09.674239 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 26 18:17:09.695461 sshd[1759]: Connection closed by 10.0.0.1 port 43628 Jan 26 18:17:09.695938 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Jan 26 18:17:09.706029 systemd[1]: sshd@3-10.0.0.64:22-10.0.0.1:43628.service: Deactivated successfully. Jan 26 18:17:09.708370 systemd[1]: session-5.scope: Deactivated successfully. Jan 26 18:17:09.710053 systemd-logind[1579]: Session 5 logged out. Waiting for processes to exit. Jan 26 18:17:09.713259 systemd[1]: Started sshd@4-10.0.0.64:22-10.0.0.1:43644.service - OpenSSH per-connection server daemon (10.0.0.1:43644). Jan 26 18:17:09.714273 systemd-logind[1579]: Removed session 5. Jan 26 18:17:09.799746 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 43644 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:17:09.802476 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:17:09.810791 systemd-logind[1579]: New session 6 of user core. Jan 26 18:17:09.818184 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 26 18:17:09.835085 sshd[1770]: Connection closed by 10.0.0.1 port 43644 Jan 26 18:17:09.835785 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jan 26 18:17:09.847343 systemd[1]: sshd@4-10.0.0.64:22-10.0.0.1:43644.service: Deactivated successfully. Jan 26 18:17:09.850155 systemd[1]: session-6.scope: Deactivated successfully. Jan 26 18:17:09.851935 systemd-logind[1579]: Session 6 logged out. Waiting for processes to exit. Jan 26 18:17:09.855342 systemd[1]: Started sshd@5-10.0.0.64:22-10.0.0.1:43650.service - OpenSSH per-connection server daemon (10.0.0.1:43650). Jan 26 18:17:09.856581 systemd-logind[1579]: Removed session 6. Jan 26 18:17:09.941331 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 43650 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:17:09.943580 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:17:09.951799 systemd-logind[1579]: New session 7 of user core. 
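
The kubelet exit above is the usual pre-bootstrap failure: the service is started with a config file path of /var/lib/kubelet/config.yaml (per the error text), and that file only exists once the node has been bootstrapped, typically by kubeadm. As the "Scheduled restart job" entries below show, systemd keeps retrying the unit until the file appears. A sketch of the check and the usual remedy, assuming a kubeadm-managed node:

    # Confirm the missing file and the restart loop
    ls -l /var/lib/kubelet/config.yaml
    systemctl status kubelet

    # Typical remedy (illustrative; exact flags depend on the cluster):
    kubeadm init ...     # first control-plane node
    kubeadm join ...     # workers; both write /var/lib/kubelet/config.yaml
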
Jan 26 18:17:09.968323 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 26 18:17:09.990551 sshd[1780]: Connection closed by 10.0.0.1 port 43650 Jan 26 18:17:09.992135 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Jan 26 18:17:10.013664 systemd[1]: sshd@5-10.0.0.64:22-10.0.0.1:43650.service: Deactivated successfully. Jan 26 18:17:10.016247 systemd[1]: session-7.scope: Deactivated successfully. Jan 26 18:17:10.017635 systemd-logind[1579]: Session 7 logged out. Waiting for processes to exit. Jan 26 18:17:10.021477 systemd[1]: Started sshd@6-10.0.0.64:22-10.0.0.1:43660.service - OpenSSH per-connection server daemon (10.0.0.1:43660). Jan 26 18:17:10.022416 systemd-logind[1579]: Removed session 7. Jan 26 18:17:10.102275 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 43660 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:17:10.105133 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:17:10.112996 systemd-logind[1579]: New session 8 of user core. Jan 26 18:17:10.127092 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 26 18:17:10.157172 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 26 18:17:10.157560 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 26 18:17:10.179383 sudo[1791]: pam_unix(sudo:session): session closed for user root Jan 26 18:17:10.181251 sshd[1790]: Connection closed by 10.0.0.1 port 43660 Jan 26 18:17:10.181754 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Jan 26 18:17:10.192774 systemd[1]: sshd@6-10.0.0.64:22-10.0.0.1:43660.service: Deactivated successfully. Jan 26 18:17:10.195572 systemd[1]: session-8.scope: Deactivated successfully. Jan 26 18:17:10.197186 systemd-logind[1579]: Session 8 logged out. Waiting for processes to exit. Jan 26 18:17:10.201233 systemd[1]: Started sshd@7-10.0.0.64:22-10.0.0.1:43662.service - OpenSSH per-connection server daemon (10.0.0.1:43662). Jan 26 18:17:10.202144 systemd-logind[1579]: Removed session 8. Jan 26 18:17:10.231776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 26 18:17:10.234063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:17:10.286969 sshd[1798]: Accepted publickey for core from 10.0.0.1 port 43662 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:17:10.289194 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:17:10.296169 systemd-logind[1579]: New session 9 of user core. Jan 26 18:17:10.306107 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 26 18:17:10.328318 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 26 18:17:10.328753 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 26 18:17:10.365143 sudo[1807]: pam_unix(sudo:session): session closed for user root Jan 26 18:17:10.376450 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 26 18:17:10.377012 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 26 18:17:10.388812 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
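
The two sudo invocations above delete the packaged SELinux/default audit rule files and then restart audit-rules, so the reload that follows runs against an empty rules.d directory; the PROCTITLE record below decodes to /sbin/auditctl -R /etc/audit/audit.rules, and augenrules accordingly reports "No rules". The resulting state can be inspected with:

    augenrules --check     # is /etc/audit/audit.rules in sync with rules.d?
    auditctl -l            # rules currently loaded in the kernel ("No rules" after this change)
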
Jan 26 18:17:10.452000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 26 18:17:10.453362 augenrules[1833]: No rules Jan 26 18:17:10.455686 kernel: kauditd_printk_skb: 146 callbacks suppressed Jan 26 18:17:10.455788 kernel: audit: type=1305 audit(1769451430.452:223): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 26 18:17:10.456598 systemd[1]: audit-rules.service: Deactivated successfully. Jan 26 18:17:10.452000 audit[1833]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd975b5610 a2=420 a3=0 items=0 ppid=1812 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:10.475919 kernel: audit: type=1300 audit(1769451430.452:223): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd975b5610 a2=420 a3=0 items=0 ppid=1812 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:10.476001 kernel: audit: type=1327 audit(1769451430.452:223): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 26 18:17:10.452000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 26 18:17:10.482494 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 26 18:17:10.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:10.484094 sudo[1806]: pam_unix(sudo:session): session closed for user root Jan 26 18:17:10.485801 sshd[1805]: Connection closed by 10.0.0.1 port 43662 Jan 26 18:17:10.486792 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Jan 26 18:17:10.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:10.500526 kernel: audit: type=1130 audit(1769451430.482:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:10.500574 kernel: audit: type=1131 audit(1769451430.482:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:10.483000 audit[1806]: USER_END pid=1806 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:17:10.511189 kernel: audit: type=1106 audit(1769451430.483:226): pid=1806 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 26 18:17:10.511260 kernel: audit: type=1104 audit(1769451430.483:227): pid=1806 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:17:10.483000 audit[1806]: CRED_DISP pid=1806 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:17:10.490000 audit[1798]: USER_END pid=1798 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:17:10.534610 kernel: audit: type=1106 audit(1769451430.490:228): pid=1798 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:17:10.534671 kernel: audit: type=1104 audit(1769451430.490:229): pid=1798 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:17:10.490000 audit[1798]: CRED_DISP pid=1798 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:17:10.551170 systemd[1]: sshd@7-10.0.0.64:22-10.0.0.1:43662.service: Deactivated successfully. Jan 26 18:17:10.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.64:22-10.0.0.1:43662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:10.554261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:17:10.555947 systemd[1]: session-9.scope: Deactivated successfully. Jan 26 18:17:10.557193 systemd-logind[1579]: Session 9 logged out. Waiting for processes to exit. Jan 26 18:17:10.560769 systemd-logind[1579]: Removed session 9. Jan 26 18:17:10.562274 systemd[1]: Started sshd@8-10.0.0.64:22-10.0.0.1:43670.service - OpenSSH per-connection server daemon (10.0.0.1:43670). Jan 26 18:17:10.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:10.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.64:22-10.0.0.1:43670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:10.565933 kernel: audit: type=1131 audit(1769451430.550:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.64:22-10.0.0.1:43662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:17:10.566370 (kubelet)[1842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 26 18:17:10.624000 audit[1846]: USER_ACCT pid=1846 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:17:10.626227 sshd[1846]: Accepted publickey for core from 10.0.0.1 port 43670 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:17:10.626000 audit[1846]: CRED_ACQ pid=1846 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:17:10.626000 audit[1846]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9d2d1bb0 a2=3 a3=0 items=0 ppid=1 pid=1846 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:10.626000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:17:10.628339 sshd-session[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:17:10.635040 kubelet[1842]: E0126 18:17:10.634941 1842 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 26 18:17:10.636276 systemd-logind[1579]: New session 10 of user core. Jan 26 18:17:10.645060 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 26 18:17:10.645354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 26 18:17:10.645531 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 26 18:17:10.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 26 18:17:10.646125 systemd[1]: kubelet.service: Consumed 254ms CPU time, 111.3M memory peak. Jan 26 18:17:10.648000 audit[1846]: USER_START pid=1846 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:17:10.650000 audit[1858]: CRED_ACQ pid=1858 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:17:10.663000 audit[1859]: USER_ACCT pid=1859 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 26 18:17:10.664903 sudo[1859]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 26 18:17:10.664000 audit[1859]: CRED_REFR pid=1859 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:17:10.665415 sudo[1859]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 26 18:17:10.664000 audit[1859]: USER_START pid=1859 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:17:12.905129 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 26 18:17:12.927380 (dockerd)[1881]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 26 18:17:14.333630 dockerd[1881]: time="2026-01-26T18:17:14.333271984Z" level=info msg="Starting up" Jan 26 18:17:14.335799 dockerd[1881]: time="2026-01-26T18:17:14.335654952Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 26 18:17:14.422782 dockerd[1881]: time="2026-01-26T18:17:14.422604198Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 26 18:17:14.557318 systemd[1]: var-lib-docker-metacopy\x2dcheck3958013060-merged.mount: Deactivated successfully. Jan 26 18:17:14.601523 dockerd[1881]: time="2026-01-26T18:17:14.601201140Z" level=info msg="Loading containers: start." Jan 26 18:17:14.621922 kernel: Initializing XFRM netlink socket Jan 26 18:17:14.742000 audit[1934]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:14.742000 audit[1934]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd00e0b490 a2=0 a3=0 items=0 ppid=1881 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:14.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 26 18:17:14.746000 audit[1936]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:14.746000 audit[1936]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffffe569020 a2=0 a3=0 items=0 ppid=1881 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:14.746000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 26 18:17:14.751000 audit[1938]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:14.751000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd47a4ae00 a2=0 a3=0 items=0 ppid=1881 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:14.751000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 26 18:17:14.756000 audit[1940]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:14.756000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd47a20540 a2=0 a3=0 items=0 ppid=1881 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:14.756000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 26 18:17:14.760000 audit[1942]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:14.760000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc418e09d0 a2=0 a3=0 items=0 ppid=1881 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:14.760000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 26 18:17:14.765000 audit[1944]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:14.765000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdc269c860 a2=0 a3=0 items=0 ppid=1881 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:14.765000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 26 18:17:14.770000 audit[1946]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:14.770000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd6178be50 a2=0 a3=0 items=0 ppid=1881 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:14.770000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 26 18:17:14.775000 audit[1948]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:14.775000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc0a9d0740 a2=0 a3=0 items=0 ppid=1881 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:14.775000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 26 18:17:14.814000 audit[1951]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:14.814000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc4e166f80 a2=0 a3=0 items=0 ppid=1881 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:14.814000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 26 18:17:14.819000 audit[1953]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:14.819000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdbe71efd0 a2=0 a3=0 items=0 ppid=1881 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:14.819000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 26 18:17:14.824000 audit[1955]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:14.824000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff72d3cd40 a2=0 a3=0 items=0 ppid=1881 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:14.824000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 26 18:17:14.828000 audit[1957]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:14.828000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffab37fa00 a2=0 a3=0 items=0 ppid=1881 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:14.828000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 26 18:17:14.832000 audit[1959]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:14.832000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe6109a060 a2=0 a3=0 items=0 ppid=1881 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:14.832000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 26 18:17:15.113000 audit[1989]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.113000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff47f6ca70 a2=0 a3=0 items=0 ppid=1881 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.113000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 26 18:17:15.132000 audit[1991]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.132000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcf702d030 a2=0 a3=0 items=0 ppid=1881 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.132000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 26 18:17:15.139000 audit[1993]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.139000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1b4982d0 a2=0 a3=0 items=0 ppid=1881 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.139000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 26 18:17:15.144000 audit[1995]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.144000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc48fc0140 a2=0 a3=0 items=0 ppid=1881 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.144000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 26 18:17:15.148000 audit[1997]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.148000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc8968a710 a2=0 a3=0 items=0 ppid=1881 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.148000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 26 18:17:15.152000 audit[1999]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1999 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.152000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff7072af40 a2=0 a3=0 items=0 ppid=1881 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.152000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 26 18:17:15.156000 audit[2001]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.156000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcc9be0320 a2=0 a3=0 items=0 ppid=1881 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.156000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 26 18:17:15.161000 audit[2003]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.161000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff25f495d0 a2=0 a3=0 items=0 ppid=1881 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 26 18:17:15.166000 audit[2005]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.166000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffcf61debd0 a2=0 a3=0 items=0 ppid=1881 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.166000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 26 18:17:15.173000 audit[2007]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.173000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff92cbd650 a2=0 a3=0 items=0 ppid=1881 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.173000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 26 18:17:15.178000 audit[2009]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2009 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.178000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc0314ba20 a2=0 a3=0 items=0 ppid=1881 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.178000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 26 18:17:15.183000 audit[2011]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.183000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc9ffaed20 a2=0 a3=0 items=0 ppid=1881 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.183000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 26 18:17:15.189000 audit[2013]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.189000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd6f164580 a2=0 a3=0 items=0 ppid=1881 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.189000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 26 18:17:15.203000 audit[2018]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:15.203000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff11a04280 a2=0 a3=0 items=0 ppid=1881 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.203000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 26 18:17:15.209000 audit[2020]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:15.209000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe50f0fb90 a2=0 a3=0 items=0 ppid=1881 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.209000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 26 18:17:15.215000 audit[2022]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:15.215000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffff3f18f40 a2=0 a3=0 items=0 
ppid=1881 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.215000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 26 18:17:15.224000 audit[2024]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.224000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffed529e120 a2=0 a3=0 items=0 ppid=1881 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.224000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 26 18:17:15.228000 audit[2026]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.228000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffa7384db0 a2=0 a3=0 items=0 ppid=1881 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 26 18:17:15.233000 audit[2028]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:15.233000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd864a0c90 a2=0 a3=0 items=0 ppid=1881 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.233000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 26 18:17:15.268000 audit[2033]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:15.268000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffe9b4778d0 a2=0 a3=0 items=0 ppid=1881 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.268000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 26 18:17:15.274000 audit[2035]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:15.274000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe4de47a80 a2=0 a3=0 items=0 ppid=1881 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.274000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 26 18:17:15.295000 audit[2043]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:15.295000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd7906b980 a2=0 a3=0 items=0 ppid=1881 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.295000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 26 18:17:15.314000 audit[2049]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:15.314000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe0cefde10 a2=0 a3=0 items=0 ppid=1881 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.314000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 26 18:17:15.321000 audit[2051]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:15.321000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc5d063040 a2=0 a3=0 items=0 ppid=1881 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 26 18:17:15.326000 audit[2053]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:15.326000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd5a553aa0 a2=0 a3=0 items=0 ppid=1881 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.326000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 26 18:17:15.331000 audit[2055]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:15.331000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd65be87b0 a2=0 a3=0 items=0 ppid=1881 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.331000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 26 18:17:15.337000 audit[2057]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:15.337000 audit[2057]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe30ea2a50 a2=0 a3=0 items=0 ppid=1881 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:15.337000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 26 18:17:15.339394 systemd-networkd[1518]: docker0: Link UP Jan 26 18:17:15.350611 dockerd[1881]: time="2026-01-26T18:17:15.350412616Z" level=info msg="Loading containers: done." Jan 26 18:17:15.446794 dockerd[1881]: time="2026-01-26T18:17:15.446588842Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 26 18:17:15.447042 dockerd[1881]: time="2026-01-26T18:17:15.446926813Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 26 18:17:15.447175 dockerd[1881]: time="2026-01-26T18:17:15.447136113Z" level=info msg="Initializing buildkit" Jan 26 18:17:15.499783 dockerd[1881]: time="2026-01-26T18:17:15.499635482Z" level=info msg="Completed buildkit initialization" Jan 26 18:17:15.514517 dockerd[1881]: time="2026-01-26T18:17:15.514365329Z" level=info msg="Daemon has completed initialization" Jan 26 18:17:15.515589 dockerd[1881]: time="2026-01-26T18:17:15.515098858Z" level=info msg="API listen on /run/docker.sock" Jan 26 18:17:15.515560 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 26 18:17:15.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:15.521335 kernel: kauditd_printk_skb: 133 callbacks suppressed Jan 26 18:17:15.521414 kernel: audit: type=1130 audit(1769451435.515:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:15.538633 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1102060884-merged.mount: Deactivated successfully. Jan 26 18:17:16.438196 containerd[1597]: time="2026-01-26T18:17:16.438075516Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 26 18:17:17.238233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051003774.mount: Deactivated successfully. 
Jan 26 18:17:19.122596 containerd[1597]: time="2026-01-26T18:17:19.113942251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:19.125914 containerd[1597]: time="2026-01-26T18:17:19.125443062Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 26 18:17:19.132310 containerd[1597]: time="2026-01-26T18:17:19.132067534Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:19.141878 containerd[1597]: time="2026-01-26T18:17:19.141673665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:19.144282 containerd[1597]: time="2026-01-26T18:17:19.144112592Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.705966665s" Jan 26 18:17:19.144282 containerd[1597]: time="2026-01-26T18:17:19.144236083Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 26 18:17:19.163592 containerd[1597]: time="2026-01-26T18:17:19.163491461Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 26 18:17:20.746142 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 26 18:17:20.790539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:17:21.305193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:17:21.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:21.322963 kernel: audit: type=1130 audit(1769451441.305:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:21.324156 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 26 18:17:21.413380 kubelet[2172]: E0126 18:17:21.413302 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 26 18:17:21.419488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 26 18:17:21.419715 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 26 18:17:21.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 26 18:17:21.426584 systemd[1]: kubelet.service: Consumed 449ms CPU time, 112.5M memory peak. Jan 26 18:17:21.439013 kernel: audit: type=1131 audit(1769451441.424:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 26 18:17:22.028117 containerd[1597]: time="2026-01-26T18:17:22.027742415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:22.029315 containerd[1597]: time="2026-01-26T18:17:22.029059093Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 26 18:17:22.031256 containerd[1597]: time="2026-01-26T18:17:22.031131924Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:22.035879 containerd[1597]: time="2026-01-26T18:17:22.035690182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:22.036814 containerd[1597]: time="2026-01-26T18:17:22.036657793Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.873120987s" Jan 26 18:17:22.036814 containerd[1597]: time="2026-01-26T18:17:22.036721161Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 26 18:17:22.040716 containerd[1597]: time="2026-01-26T18:17:22.040304064Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 26 18:17:25.249956 containerd[1597]: time="2026-01-26T18:17:25.249590102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:25.251182 containerd[1597]: time="2026-01-26T18:17:25.251000696Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 26 18:17:25.253215 containerd[1597]: time="2026-01-26T18:17:25.253114643Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:25.257745 containerd[1597]: time="2026-01-26T18:17:25.257565926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:25.258600 containerd[1597]: time="2026-01-26T18:17:25.258443212Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 3.218100016s" Jan 26 18:17:25.258600 containerd[1597]: time="2026-01-26T18:17:25.258530745Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 26 18:17:25.260385 containerd[1597]: time="2026-01-26T18:17:25.260350748Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 26 18:17:27.641972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount396869615.mount: Deactivated successfully. Jan 26 18:17:29.297381 containerd[1597]: time="2026-01-26T18:17:29.297100270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:29.298868 containerd[1597]: time="2026-01-26T18:17:29.298631881Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 26 18:17:29.300596 containerd[1597]: time="2026-01-26T18:17:29.300538520Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:29.303220 containerd[1597]: time="2026-01-26T18:17:29.303163837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:29.304130 containerd[1597]: time="2026-01-26T18:17:29.304064051Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 4.043576667s" Jan 26 18:17:29.304276 containerd[1597]: time="2026-01-26T18:17:29.304133330Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 26 18:17:29.305937 containerd[1597]: time="2026-01-26T18:17:29.305751615Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 26 18:17:29.931030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1259874850.mount: Deactivated successfully. Jan 26 18:17:31.590966 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 26 18:17:31.606891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:17:31.852200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:17:31.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:31.864922 kernel: audit: type=1130 audit(1769451451.851:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:17:31.874321 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 26 18:17:32.177527 containerd[1597]: time="2026-01-26T18:17:32.174724395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:32.177527 containerd[1597]: time="2026-01-26T18:17:32.176237461Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Jan 26 18:17:32.232088 containerd[1597]: time="2026-01-26T18:17:32.231906488Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:32.237571 containerd[1597]: time="2026-01-26T18:17:32.237460214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:32.241342 containerd[1597]: time="2026-01-26T18:17:32.241274372Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.93548678s" Jan 26 18:17:32.241342 containerd[1597]: time="2026-01-26T18:17:32.241302624Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 26 18:17:32.242568 containerd[1597]: time="2026-01-26T18:17:32.242410231Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 26 18:17:32.299655 kubelet[2254]: E0126 18:17:32.299521 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 26 18:17:32.303566 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 26 18:17:32.304002 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 26 18:17:32.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 26 18:17:32.304732 systemd[1]: kubelet.service: Consumed 588ms CPU time, 111.2M memory peak. Jan 26 18:17:32.320974 kernel: audit: type=1131 audit(1769451452.303:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 26 18:17:32.668969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790754708.mount: Deactivated successfully. 
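(Annotation, not part of the log.) The containerd pull messages above report both a "bytes read" counter and the elapsed pull time, which allows a rough throughput estimate. A minimal sketch follows, assuming "bytes read" is the data actually fetched from the registry and the quoted duration is the wall-clock time of the pull; the figures are taken from the kube-apiserver and coredns pulls logged above.

```python
# Rough pull-throughput estimate from the containerd log fields above.
pulls = {
    "kube-apiserver:v1.32.11": (27401903, 2.705966665),  # bytes read, seconds
    "coredns:v1.11.3": (17569900, 2.93548678),
}
for image, (bytes_read, seconds) in pulls.items():
    print(f"{image}: {bytes_read / seconds / 1e6:.1f} MB/s")
# kube-apiserver:v1.32.11: 10.1 MB/s
# coredns:v1.11.3: 6.0 MB/s
```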
Jan 26 18:17:32.678225 containerd[1597]: time="2026-01-26T18:17:32.678125499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 26 18:17:32.679989 containerd[1597]: time="2026-01-26T18:17:32.679753945Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 26 18:17:32.681896 containerd[1597]: time="2026-01-26T18:17:32.681730306Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 26 18:17:32.685063 containerd[1597]: time="2026-01-26T18:17:32.684933991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 26 18:17:32.686064 containerd[1597]: time="2026-01-26T18:17:32.685883309Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 443.044044ms" Jan 26 18:17:32.686064 containerd[1597]: time="2026-01-26T18:17:32.685953930Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 26 18:17:32.687344 containerd[1597]: time="2026-01-26T18:17:32.687125541Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 26 18:17:33.185335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464390807.mount: Deactivated successfully. 
Jan 26 18:17:38.536591 containerd[1597]: time="2026-01-26T18:17:38.536427332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:38.537692 containerd[1597]: time="2026-01-26T18:17:38.537620978Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56885499" Jan 26 18:17:38.539531 containerd[1597]: time="2026-01-26T18:17:38.539469510Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:38.543005 containerd[1597]: time="2026-01-26T18:17:38.542982323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:17:38.544080 containerd[1597]: time="2026-01-26T18:17:38.544004852Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.856693237s" Jan 26 18:17:38.544080 containerd[1597]: time="2026-01-26T18:17:38.544052650Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 26 18:17:41.728570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:17:41.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:41.729144 systemd[1]: kubelet.service: Consumed 588ms CPU time, 111.2M memory peak. Jan 26 18:17:41.733159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:17:41.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:41.755442 kernel: audit: type=1130 audit(1769451461.728:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:41.755690 kernel: audit: type=1131 audit(1769451461.728:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:41.772367 systemd[1]: Reload requested from client PID 2348 ('systemctl') (unit session-10.scope)... Jan 26 18:17:41.772416 systemd[1]: Reloading... Jan 26 18:17:42.203906 zram_generator::config[2398]: No configuration found. Jan 26 18:17:42.480630 systemd[1]: Reloading finished in 707 ms. 
Jan 26 18:17:42.509000 audit: BPF prog-id=63 op=LOAD Jan 26 18:17:42.516035 kernel: audit: type=1334 audit(1769451462.509:289): prog-id=63 op=LOAD Jan 26 18:17:42.509000 audit: BPF prog-id=43 op=UNLOAD Jan 26 18:17:42.509000 audit: BPF prog-id=64 op=LOAD Jan 26 18:17:42.526583 kernel: audit: type=1334 audit(1769451462.509:290): prog-id=43 op=UNLOAD Jan 26 18:17:42.526647 kernel: audit: type=1334 audit(1769451462.509:291): prog-id=64 op=LOAD Jan 26 18:17:42.526671 kernel: audit: type=1334 audit(1769451462.509:292): prog-id=65 op=LOAD Jan 26 18:17:42.509000 audit: BPF prog-id=65 op=LOAD Jan 26 18:17:42.529916 kernel: audit: type=1334 audit(1769451462.509:293): prog-id=44 op=UNLOAD Jan 26 18:17:42.509000 audit: BPF prog-id=44 op=UNLOAD Jan 26 18:17:42.509000 audit: BPF prog-id=45 op=UNLOAD Jan 26 18:17:42.537322 kernel: audit: type=1334 audit(1769451462.509:294): prog-id=45 op=UNLOAD Jan 26 18:17:42.510000 audit: BPF prog-id=66 op=LOAD Jan 26 18:17:42.540988 kernel: audit: type=1334 audit(1769451462.510:295): prog-id=66 op=LOAD Jan 26 18:17:42.541026 kernel: audit: type=1334 audit(1769451462.510:296): prog-id=59 op=UNLOAD Jan 26 18:17:42.510000 audit: BPF prog-id=59 op=UNLOAD Jan 26 18:17:42.515000 audit: BPF prog-id=67 op=LOAD Jan 26 18:17:42.515000 audit: BPF prog-id=50 op=UNLOAD Jan 26 18:17:42.515000 audit: BPF prog-id=68 op=LOAD Jan 26 18:17:42.515000 audit: BPF prog-id=69 op=LOAD Jan 26 18:17:42.515000 audit: BPF prog-id=51 op=UNLOAD Jan 26 18:17:42.515000 audit: BPF prog-id=52 op=UNLOAD Jan 26 18:17:42.515000 audit: BPF prog-id=70 op=LOAD Jan 26 18:17:42.515000 audit: BPF prog-id=55 op=UNLOAD Jan 26 18:17:42.516000 audit: BPF prog-id=71 op=LOAD Jan 26 18:17:42.516000 audit: BPF prog-id=72 op=LOAD Jan 26 18:17:42.516000 audit: BPF prog-id=56 op=UNLOAD Jan 26 18:17:42.516000 audit: BPF prog-id=57 op=UNLOAD Jan 26 18:17:42.517000 audit: BPF prog-id=73 op=LOAD Jan 26 18:17:42.517000 audit: BPF prog-id=46 op=UNLOAD Jan 26 18:17:42.517000 audit: BPF prog-id=74 op=LOAD Jan 26 18:17:42.517000 audit: BPF prog-id=75 op=LOAD Jan 26 18:17:42.517000 audit: BPF prog-id=47 op=UNLOAD Jan 26 18:17:42.517000 audit: BPF prog-id=48 op=UNLOAD Jan 26 18:17:42.518000 audit: BPF prog-id=76 op=LOAD Jan 26 18:17:42.518000 audit: BPF prog-id=49 op=UNLOAD Jan 26 18:17:42.520000 audit: BPF prog-id=77 op=LOAD Jan 26 18:17:42.521000 audit: BPF prog-id=60 op=UNLOAD Jan 26 18:17:42.521000 audit: BPF prog-id=78 op=LOAD Jan 26 18:17:42.521000 audit: BPF prog-id=79 op=LOAD Jan 26 18:17:42.521000 audit: BPF prog-id=61 op=UNLOAD Jan 26 18:17:42.521000 audit: BPF prog-id=62 op=UNLOAD Jan 26 18:17:42.522000 audit: BPF prog-id=80 op=LOAD Jan 26 18:17:42.554000 audit: BPF prog-id=58 op=UNLOAD Jan 26 18:17:42.554000 audit: BPF prog-id=81 op=LOAD Jan 26 18:17:42.554000 audit: BPF prog-id=82 op=LOAD Jan 26 18:17:42.554000 audit: BPF prog-id=53 op=UNLOAD Jan 26 18:17:42.554000 audit: BPF prog-id=54 op=UNLOAD Jan 26 18:17:42.580585 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 26 18:17:42.580790 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 26 18:17:42.581296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:17:42.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 26 18:17:42.581374 systemd[1]: kubelet.service: Consumed 234ms CPU time, 98.5M memory peak. 
Jan 26 18:17:42.583286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:17:42.850100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:17:42.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:42.875407 (kubelet)[2443]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 26 18:17:43.274925 update_engine[1581]: I20260126 18:17:43.273480 1581 update_attempter.cc:509] Updating boot flags... Jan 26 18:17:43.339431 kubelet[2443]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 18:17:43.339431 kubelet[2443]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 26 18:17:43.339431 kubelet[2443]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 18:17:43.340468 kubelet[2443]: I0126 18:17:43.339568 2443 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 18:17:44.004780 kubelet[2443]: I0126 18:17:44.004620 2443 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 26 18:17:44.004780 kubelet[2443]: I0126 18:17:44.004741 2443 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 18:17:44.005202 kubelet[2443]: I0126 18:17:44.005143 2443 server.go:954] "Client rotation is on, will bootstrap in background" Jan 26 18:17:44.041284 kubelet[2443]: E0126 18:17:44.041172 2443 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:17:44.043694 kubelet[2443]: I0126 18:17:44.043636 2443 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 26 18:17:44.052388 kubelet[2443]: I0126 18:17:44.052314 2443 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 26 18:17:44.059558 kubelet[2443]: I0126 18:17:44.059470 2443 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 26 18:17:44.060872 kubelet[2443]: I0126 18:17:44.060770 2443 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 18:17:44.061144 kubelet[2443]: I0126 18:17:44.060907 2443 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 18:17:44.061144 kubelet[2443]: I0126 18:17:44.061124 2443 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 18:17:44.061144 kubelet[2443]: I0126 18:17:44.061133 2443 container_manager_linux.go:304] "Creating device plugin manager" Jan 26 18:17:44.061394 kubelet[2443]: I0126 18:17:44.061286 2443 state_mem.go:36] "Initialized new in-memory state store" Jan 26 18:17:44.067166 kubelet[2443]: I0126 18:17:44.067079 2443 kubelet.go:446] "Attempting to sync node with API server" Jan 26 18:17:44.067214 kubelet[2443]: I0126 18:17:44.067170 2443 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 26 18:17:44.067214 kubelet[2443]: I0126 18:17:44.067190 2443 kubelet.go:352] "Adding apiserver pod source" Jan 26 18:17:44.067214 kubelet[2443]: I0126 18:17:44.067200 2443 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 26 18:17:44.071207 kubelet[2443]: I0126 18:17:44.071096 2443 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 26 18:17:44.073244 kubelet[2443]: I0126 18:17:44.073134 2443 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 26 18:17:44.073518 kubelet[2443]: W0126 18:17:44.073409 2443 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 26 18:17:44.073518 kubelet[2443]: E0126 18:17:44.073495 2443 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:17:44.074020 kubelet[2443]: W0126 18:17:44.073992 2443 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 26 18:17:44.074104 kubelet[2443]: E0126 18:17:44.074090 2443 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:17:44.076567 kubelet[2443]: W0126 18:17:44.076118 2443 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 26 18:17:44.079028 kubelet[2443]: I0126 18:17:44.078989 2443 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 26 18:17:44.079078 kubelet[2443]: I0126 18:17:44.079052 2443 server.go:1287] "Started kubelet" Jan 26 18:17:44.079591 kubelet[2443]: I0126 18:17:44.079525 2443 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 18:17:44.082954 kubelet[2443]: I0126 18:17:44.082078 2443 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 18:17:44.083314 kubelet[2443]: I0126 18:17:44.083239 2443 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 18:17:44.083314 kubelet[2443]: I0126 18:17:44.083314 2443 server.go:479] "Adding debug handlers to kubelet server" Jan 26 18:17:44.083503 kubelet[2443]: I0126 18:17:44.083331 2443 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 18:17:44.085538 kubelet[2443]: I0126 18:17:44.085519 2443 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 26 18:17:44.088624 kubelet[2443]: E0126 18:17:44.086609 2443 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.64:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.64:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188e5abb4bb9b191 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-26 18:17:44.079028625 +0000 UTC m=+1.185993243,LastTimestamp:2026-01-26 18:17:44.079028625 +0000 UTC m=+1.185993243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 26 18:17:44.089050 kubelet[2443]: E0126 18:17:44.088994 2443 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 26 18:17:44.089199 kubelet[2443]: I0126 18:17:44.089121 2443 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 26 
18:17:44.089424 kubelet[2443]: I0126 18:17:44.089354 2443 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 26 18:17:44.089457 kubelet[2443]: I0126 18:17:44.089434 2443 reconciler.go:26] "Reconciler: start to sync state" Jan 26 18:17:44.088000 audit[2472]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:44.088000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd52c73cf0 a2=0 a3=0 items=0 ppid=2443 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.088000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 26 18:17:44.090300 kubelet[2443]: W0126 18:17:44.090202 2443 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 26 18:17:44.090300 kubelet[2443]: E0126 18:17:44.090288 2443 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:17:44.090503 kubelet[2443]: E0126 18:17:44.090320 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="200ms" Jan 26 18:17:44.090643 kubelet[2443]: I0126 18:17:44.090588 2443 factory.go:221] Registration of the systemd container factory successfully Jan 26 18:17:44.090760 kubelet[2443]: I0126 18:17:44.090693 2443 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 26 18:17:44.091321 kubelet[2443]: E0126 18:17:44.091265 2443 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 26 18:17:44.090000 audit[2473]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:44.090000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef50375d0 a2=0 a3=0 items=0 ppid=2443 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.090000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 26 18:17:44.092546 kubelet[2443]: I0126 18:17:44.092506 2443 factory.go:221] Registration of the containerd container factory successfully Jan 26 18:17:44.095000 audit[2475]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:44.095000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc36277090 a2=0 a3=0 items=0 ppid=2443 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.095000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 26 18:17:44.099000 audit[2477]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:44.099000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffe59405d0 a2=0 a3=0 items=0 ppid=2443 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.099000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 26 18:17:44.115000 audit[2484]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:44.115000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc1a50b7e0 a2=0 a3=0 items=0 ppid=2443 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 26 18:17:44.116368 kubelet[2443]: I0126 18:17:44.116290 2443 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 26 18:17:44.118000 audit[2487]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:44.118000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcadb962d0 a2=0 a3=0 items=0 ppid=2443 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.118000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 26 18:17:44.118000 audit[2488]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:44.118000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf4243870 a2=0 a3=0 items=0 ppid=2443 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 26 18:17:44.119775 kubelet[2443]: I0126 18:17:44.119393 2443 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 26 18:17:44.119775 kubelet[2443]: I0126 18:17:44.119621 2443 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 26 18:17:44.119775 kubelet[2443]: I0126 18:17:44.119645 2443 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 26 18:17:44.119775 kubelet[2443]: I0126 18:17:44.119652 2443 kubelet.go:2382] "Starting kubelet main sync loop" Jan 26 18:17:44.119962 kubelet[2443]: E0126 18:17:44.119799 2443 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 18:17:44.120880 kubelet[2443]: W0126 18:17:44.120396 2443 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 26 18:17:44.120880 kubelet[2443]: E0126 18:17:44.120442 2443 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:17:44.120000 audit[2490]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:44.120000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff117f6b0 a2=0 a3=0 items=0 ppid=2443 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.120000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 26 18:17:44.122215 kubelet[2443]: I0126 18:17:44.122127 2443 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 26 18:17:44.122303 kubelet[2443]: I0126 18:17:44.122242 2443 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 26 18:17:44.122303 kubelet[2443]: I0126 18:17:44.122260 2443 state_mem.go:36] "Initialized new in-memory state store" Jan 26 18:17:44.121000 audit[2489]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:44.121000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd18d487d0 a2=0 a3=0 items=0 ppid=2443 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.121000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 26 18:17:44.123000 audit[2491]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:44.123000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff678bcd30 a2=0 a3=0 items=0 ppid=2443 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.123000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 26 18:17:44.125000 audit[2492]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2492 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:44.125000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc341ccf10 a2=0 a3=0 items=0 ppid=2443 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.125000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 26 18:17:44.126783 kubelet[2443]: I0126 18:17:44.126694 2443 policy_none.go:49] "None policy: Start" Jan 26 18:17:44.126783 kubelet[2443]: I0126 18:17:44.126779 2443 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 26 18:17:44.127007 kubelet[2443]: I0126 18:17:44.126795 2443 state_mem.go:35] "Initializing new in-memory state store" Jan 26 18:17:44.127000 audit[2493]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:44.127000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6ad564b0 a2=0 a3=0 items=0 ppid=2443 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.127000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 26 18:17:44.135621 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 26 18:17:44.161545 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 26 18:17:44.166588 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 26 18:17:44.190038 kubelet[2443]: E0126 18:17:44.189956 2443 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 26 18:17:44.193281 kubelet[2443]: I0126 18:17:44.193156 2443 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 18:17:44.193410 kubelet[2443]: I0126 18:17:44.193373 2443 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 18:17:44.193410 kubelet[2443]: I0126 18:17:44.193389 2443 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 18:17:44.193693 kubelet[2443]: I0126 18:17:44.193598 2443 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 18:17:44.195295 kubelet[2443]: E0126 18:17:44.195115 2443 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 26 18:17:44.195295 kubelet[2443]: E0126 18:17:44.195175 2443 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 26 18:17:44.230493 systemd[1]: Created slice kubepods-burstable-pode21cd28fce555573da7e6541dec4d111.slice - libcontainer container kubepods-burstable-pode21cd28fce555573da7e6541dec4d111.slice. 
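(Annotation, not part of the log.) The repeated "dial tcp 10.0.0.64:6443: connect: connection refused" errors above come from the kubelet retrying the API server endpoint while the kube-apiserver static pod is still being set up. A minimal sketch, using the endpoint from the log, of the kind of probe that would show the port is not yet accepting connections:

```python
# Probe the API server endpoint the kubelet is retrying above. While the
# kube-apiserver static pod is still being created, the connect attempt
# fails (ECONNREFUSED), matching the "connection refused" log errors.
import socket

def api_server_up(host="10.0.0.64", port=6443, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(api_server_up())
```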
Jan 26 18:17:44.244077 kubelet[2443]: E0126 18:17:44.243983 2443 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:17:44.247884 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 26 18:17:44.261520 kubelet[2443]: E0126 18:17:44.261319 2443 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:17:44.266641 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 26 18:17:44.268971 kubelet[2443]: E0126 18:17:44.268916 2443 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:17:44.291241 kubelet[2443]: I0126 18:17:44.291070 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e21cd28fce555573da7e6541dec4d111-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e21cd28fce555573da7e6541dec4d111\") " pod="kube-system/kube-apiserver-localhost" Jan 26 18:17:44.291575 kubelet[2443]: E0126 18:17:44.291464 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="400ms" Jan 26 18:17:44.296379 kubelet[2443]: I0126 18:17:44.296323 2443 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 26 18:17:44.296799 kubelet[2443]: E0126 18:17:44.296618 2443 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 26 18:17:44.393201 kubelet[2443]: I0126 18:17:44.392965 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:17:44.393201 kubelet[2443]: I0126 18:17:44.393070 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:17:44.393201 kubelet[2443]: I0126 18:17:44.393131 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:17:44.393201 kubelet[2443]: I0126 18:17:44.393154 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:17:44.393201 kubelet[2443]: I0126 18:17:44.393168 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 26 18:17:44.394335 kubelet[2443]: I0126 18:17:44.393216 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e21cd28fce555573da7e6541dec4d111-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e21cd28fce555573da7e6541dec4d111\") " pod="kube-system/kube-apiserver-localhost" Jan 26 18:17:44.394335 kubelet[2443]: I0126 18:17:44.393232 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e21cd28fce555573da7e6541dec4d111-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e21cd28fce555573da7e6541dec4d111\") " pod="kube-system/kube-apiserver-localhost" Jan 26 18:17:44.394335 kubelet[2443]: I0126 18:17:44.393247 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:17:44.505989 kubelet[2443]: I0126 18:17:44.505895 2443 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 26 18:17:44.506470 kubelet[2443]: E0126 18:17:44.506414 2443 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 26 18:17:44.548403 kubelet[2443]: E0126 18:17:44.547956 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:44.552266 containerd[1597]: time="2026-01-26T18:17:44.551801592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e21cd28fce555573da7e6541dec4d111,Namespace:kube-system,Attempt:0,}" Jan 26 18:17:44.563793 kubelet[2443]: E0126 18:17:44.563661 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:44.566933 containerd[1597]: time="2026-01-26T18:17:44.566659831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 26 18:17:44.569535 kubelet[2443]: E0126 18:17:44.569500 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:44.570124 containerd[1597]: time="2026-01-26T18:17:44.570052753Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 26 18:17:44.693897 kubelet[2443]: E0126 18:17:44.693348 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="800ms" Jan 26 18:17:44.736213 containerd[1597]: time="2026-01-26T18:17:44.735748570Z" level=info msg="connecting to shim 9a69fe2782d9ebc778a4c3e2168138cf7e1dbdf7a1c973c4218fcf121f13e647" address="unix:///run/containerd/s/c49f810387fadc9d06f427526b2e24e96820111294552a9de2ceb7109c03f1ab" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:17:44.759167 containerd[1597]: time="2026-01-26T18:17:44.758693456Z" level=info msg="connecting to shim ac7bc209868e4a244a5edd5aa40cf452d5a69ca74b3a749cb6eb332ed6fb5e36" address="unix:///run/containerd/s/b5a4cf606a9e72e8103f2c672bf4b1f01eceee15085e8953be89fa4e2b8ad52f" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:17:44.766460 containerd[1597]: time="2026-01-26T18:17:44.766395742Z" level=info msg="connecting to shim a093f40fd15f3fd92101cf72e03615e3e7e067313550c6f05b13634eff5c0683" address="unix:///run/containerd/s/1757625a20f19c28b53e6fd87fde4e5b6903d3a59a4bf2f2dc69f58aed00888a" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:17:44.796670 systemd[1]: Started cri-containerd-9a69fe2782d9ebc778a4c3e2168138cf7e1dbdf7a1c973c4218fcf121f13e647.scope - libcontainer container 9a69fe2782d9ebc778a4c3e2168138cf7e1dbdf7a1c973c4218fcf121f13e647. Jan 26 18:17:44.844000 audit: BPF prog-id=83 op=LOAD Jan 26 18:17:44.846000 audit: BPF prog-id=84 op=LOAD Jan 26 18:17:44.846000 audit[2533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2510 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961363966653237383264396562633737386134633365323136383133 Jan 26 18:17:44.846000 audit: BPF prog-id=84 op=UNLOAD Jan 26 18:17:44.846000 audit[2533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961363966653237383264396562633737386134633365323136383133 Jan 26 18:17:44.846000 audit: BPF prog-id=85 op=LOAD Jan 26 18:17:44.846000 audit[2533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2510 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.846000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961363966653237383264396562633737386134633365323136383133 Jan 26 18:17:44.847000 audit: BPF prog-id=86 op=LOAD Jan 26 18:17:44.847000 audit[2533]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2510 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961363966653237383264396562633737386134633365323136383133 Jan 26 18:17:44.848000 audit: BPF prog-id=86 op=UNLOAD Jan 26 18:17:44.848000 audit[2533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961363966653237383264396562633737386134633365323136383133 Jan 26 18:17:44.849000 audit: BPF prog-id=85 op=UNLOAD Jan 26 18:17:44.849000 audit[2533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961363966653237383264396562633737386134633365323136383133 Jan 26 18:17:44.849000 audit: BPF prog-id=87 op=LOAD Jan 26 18:17:44.849000 audit[2533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2510 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:44.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961363966653237383264396562633737386134633365323136383133 Jan 26 18:17:44.897121 systemd[1]: Started cri-containerd-a093f40fd15f3fd92101cf72e03615e3e7e067313550c6f05b13634eff5c0683.scope - libcontainer container a093f40fd15f3fd92101cf72e03615e3e7e067313550c6f05b13634eff5c0683. Jan 26 18:17:44.903261 systemd[1]: Started cri-containerd-ac7bc209868e4a244a5edd5aa40cf452d5a69ca74b3a749cb6eb332ed6fb5e36.scope - libcontainer container ac7bc209868e4a244a5edd5aa40cf452d5a69ca74b3a749cb6eb332ed6fb5e36. 
Jan 26 18:17:44.907410 kubelet[2443]: W0126 18:17:44.907290 2443 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 26 18:17:44.907487 kubelet[2443]: E0126 18:17:44.907432 2443 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:17:45.176753 kernel: hrtimer: interrupt took 10646833 ns Jan 26 18:17:45.177427 kubelet[2443]: I0126 18:17:44.911763 2443 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 26 18:17:45.177427 kubelet[2443]: E0126 18:17:45.062336 2443 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 26 18:17:45.177427 kubelet[2443]: W0126 18:17:45.063396 2443 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 26 18:17:45.177427 kubelet[2443]: E0126 18:17:45.068431 2443 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:17:45.204000 audit: BPF prog-id=88 op=LOAD Jan 26 18:17:45.206000 audit: BPF prog-id=89 op=LOAD Jan 26 18:17:45.206000 audit[2568]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2531 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130393366343066643135663366643932313031636637326530333631 Jan 26 18:17:45.206000 audit: BPF prog-id=89 op=UNLOAD Jan 26 18:17:45.206000 audit[2568]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130393366343066643135663366643932313031636637326530333631 Jan 26 18:17:45.207000 audit: BPF prog-id=90 op=LOAD Jan 26 18:17:45.207000 audit[2568]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2531 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130393366343066643135663366643932313031636637326530333631 Jan 26 18:17:45.208000 audit: BPF prog-id=91 op=LOAD Jan 26 18:17:45.208000 audit: BPF prog-id=92 op=LOAD Jan 26 18:17:45.208000 audit[2568]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2531 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130393366343066643135663366643932313031636637326530333631 Jan 26 18:17:45.208000 audit: BPF prog-id=92 op=UNLOAD Jan 26 18:17:45.208000 audit[2568]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130393366343066643135663366643932313031636637326530333631 Jan 26 18:17:45.208000 audit: BPF prog-id=90 op=UNLOAD Jan 26 18:17:45.208000 audit[2568]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130393366343066643135663366643932313031636637326530333631 Jan 26 18:17:45.208000 audit: BPF prog-id=93 op=LOAD Jan 26 18:17:45.208000 audit[2563]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2527 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163376263323039383638653461323434613565646435616134306366 Jan 26 18:17:45.208000 audit: BPF prog-id=93 op=UNLOAD Jan 26 18:17:45.208000 audit[2563]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2527 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 
18:17:45.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163376263323039383638653461323434613565646435616134306366 Jan 26 18:17:45.208000 audit: BPF prog-id=94 op=LOAD Jan 26 18:17:45.208000 audit[2563]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2527 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163376263323039383638653461323434613565646435616134306366 Jan 26 18:17:45.209000 audit: BPF prog-id=95 op=LOAD Jan 26 18:17:45.209000 audit[2563]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2527 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163376263323039383638653461323434613565646435616134306366 Jan 26 18:17:45.209000 audit: BPF prog-id=95 op=UNLOAD Jan 26 18:17:45.209000 audit[2563]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2527 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163376263323039383638653461323434613565646435616134306366 Jan 26 18:17:45.209000 audit: BPF prog-id=94 op=UNLOAD Jan 26 18:17:45.209000 audit[2563]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2527 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163376263323039383638653461323434613565646435616134306366 Jan 26 18:17:45.208000 audit: BPF prog-id=96 op=LOAD Jan 26 18:17:45.208000 audit[2568]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2531 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.208000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130393366343066643135663366643932313031636637326530333631 Jan 26 18:17:45.209000 audit: BPF prog-id=97 op=LOAD Jan 26 18:17:45.209000 audit[2563]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2527 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163376263323039383638653461323434613565646435616134306366 Jan 26 18:17:45.287028 containerd[1597]: time="2026-01-26T18:17:45.286977814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a69fe2782d9ebc778a4c3e2168138cf7e1dbdf7a1c973c4218fcf121f13e647\"" Jan 26 18:17:45.292009 kubelet[2443]: E0126 18:17:45.291929 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:45.299521 containerd[1597]: time="2026-01-26T18:17:45.299363048Z" level=info msg="CreateContainer within sandbox \"9a69fe2782d9ebc778a4c3e2168138cf7e1dbdf7a1c973c4218fcf121f13e647\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 26 18:17:45.324424 containerd[1597]: time="2026-01-26T18:17:45.324319565Z" level=info msg="Container 6eceb2b7dfe0c5a09fce6cb2820ff3e1cf5d2bc43728f8ad1d422a1d8a16a772: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:17:45.328423 kubelet[2443]: W0126 18:17:45.328072 2443 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 26 18:17:45.328423 kubelet[2443]: E0126 18:17:45.328342 2443 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:17:45.334664 containerd[1597]: time="2026-01-26T18:17:45.334598339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e21cd28fce555573da7e6541dec4d111,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac7bc209868e4a244a5edd5aa40cf452d5a69ca74b3a749cb6eb332ed6fb5e36\"" Jan 26 18:17:45.335751 containerd[1597]: time="2026-01-26T18:17:45.335687359Z" level=info msg="CreateContainer within sandbox \"9a69fe2782d9ebc778a4c3e2168138cf7e1dbdf7a1c973c4218fcf121f13e647\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6eceb2b7dfe0c5a09fce6cb2820ff3e1cf5d2bc43728f8ad1d422a1d8a16a772\"" Jan 26 18:17:45.336578 containerd[1597]: time="2026-01-26T18:17:45.336522543Z" level=info msg="StartContainer for \"6eceb2b7dfe0c5a09fce6cb2820ff3e1cf5d2bc43728f8ad1d422a1d8a16a772\"" Jan 26 
18:17:45.337271 kubelet[2443]: E0126 18:17:45.337245 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:45.338182 containerd[1597]: time="2026-01-26T18:17:45.338148172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"a093f40fd15f3fd92101cf72e03615e3e7e067313550c6f05b13634eff5c0683\"" Jan 26 18:17:45.338810 kubelet[2443]: E0126 18:17:45.338759 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:45.339240 containerd[1597]: time="2026-01-26T18:17:45.339166881Z" level=info msg="connecting to shim 6eceb2b7dfe0c5a09fce6cb2820ff3e1cf5d2bc43728f8ad1d422a1d8a16a772" address="unix:///run/containerd/s/c49f810387fadc9d06f427526b2e24e96820111294552a9de2ceb7109c03f1ab" protocol=ttrpc version=3 Jan 26 18:17:45.340784 containerd[1597]: time="2026-01-26T18:17:45.340617039Z" level=info msg="CreateContainer within sandbox \"ac7bc209868e4a244a5edd5aa40cf452d5a69ca74b3a749cb6eb332ed6fb5e36\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 26 18:17:45.342502 containerd[1597]: time="2026-01-26T18:17:45.342435507Z" level=info msg="CreateContainer within sandbox \"a093f40fd15f3fd92101cf72e03615e3e7e067313550c6f05b13634eff5c0683\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 26 18:17:45.355510 containerd[1597]: time="2026-01-26T18:17:45.355444839Z" level=info msg="Container 7029b3a4ea7ece28284f8d581636d919ead9835d40a984daabf58c0883d44aca: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:17:45.364380 containerd[1597]: time="2026-01-26T18:17:45.364274591Z" level=info msg="Container eee3ea2f15fb36ce4d0987fb7ab595c3023f126d6bf70769c90bd1d7bb1fe268: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:17:45.367803 containerd[1597]: time="2026-01-26T18:17:45.367652511Z" level=info msg="CreateContainer within sandbox \"a093f40fd15f3fd92101cf72e03615e3e7e067313550c6f05b13634eff5c0683\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7029b3a4ea7ece28284f8d581636d919ead9835d40a984daabf58c0883d44aca\"" Jan 26 18:17:45.369588 containerd[1597]: time="2026-01-26T18:17:45.369541199Z" level=info msg="StartContainer for \"7029b3a4ea7ece28284f8d581636d919ead9835d40a984daabf58c0883d44aca\"" Jan 26 18:17:45.373882 containerd[1597]: time="2026-01-26T18:17:45.373779355Z" level=info msg="CreateContainer within sandbox \"ac7bc209868e4a244a5edd5aa40cf452d5a69ca74b3a749cb6eb332ed6fb5e36\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eee3ea2f15fb36ce4d0987fb7ab595c3023f126d6bf70769c90bd1d7bb1fe268\"" Jan 26 18:17:45.373936 containerd[1597]: time="2026-01-26T18:17:45.373887324Z" level=info msg="connecting to shim 7029b3a4ea7ece28284f8d581636d919ead9835d40a984daabf58c0883d44aca" address="unix:///run/containerd/s/1757625a20f19c28b53e6fd87fde4e5b6903d3a59a4bf2f2dc69f58aed00888a" protocol=ttrpc version=3 Jan 26 18:17:45.376885 containerd[1597]: time="2026-01-26T18:17:45.376221726Z" level=info msg="StartContainer for \"eee3ea2f15fb36ce4d0987fb7ab595c3023f126d6bf70769c90bd1d7bb1fe268\"" Jan 26 18:17:45.378641 containerd[1597]: time="2026-01-26T18:17:45.378615630Z" level=info msg="connecting to shim 
eee3ea2f15fb36ce4d0987fb7ab595c3023f126d6bf70769c90bd1d7bb1fe268" address="unix:///run/containerd/s/b5a4cf606a9e72e8103f2c672bf4b1f01eceee15085e8953be89fa4e2b8ad52f" protocol=ttrpc version=3 Jan 26 18:17:45.386193 systemd[1]: Started cri-containerd-6eceb2b7dfe0c5a09fce6cb2820ff3e1cf5d2bc43728f8ad1d422a1d8a16a772.scope - libcontainer container 6eceb2b7dfe0c5a09fce6cb2820ff3e1cf5d2bc43728f8ad1d422a1d8a16a772. Jan 26 18:17:45.424128 systemd[1]: Started cri-containerd-7029b3a4ea7ece28284f8d581636d919ead9835d40a984daabf58c0883d44aca.scope - libcontainer container 7029b3a4ea7ece28284f8d581636d919ead9835d40a984daabf58c0883d44aca. Jan 26 18:17:45.427000 audit: BPF prog-id=98 op=LOAD Jan 26 18:17:45.429000 audit: BPF prog-id=99 op=LOAD Jan 26 18:17:45.429000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2510 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665636562326237646665306335613039666365366362323832306666 Jan 26 18:17:45.430000 audit: BPF prog-id=99 op=UNLOAD Jan 26 18:17:45.430000 audit[2631]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665636562326237646665306335613039666365366362323832306666 Jan 26 18:17:45.430000 audit: BPF prog-id=100 op=LOAD Jan 26 18:17:45.430000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2510 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665636562326237646665306335613039666365366362323832306666 Jan 26 18:17:45.431000 audit: BPF prog-id=101 op=LOAD Jan 26 18:17:45.431000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2510 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665636562326237646665306335613039666365366362323832306666 Jan 26 18:17:45.431000 audit: BPF prog-id=101 op=UNLOAD Jan 26 18:17:45.431000 audit[2631]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=17 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665636562326237646665306335613039666365366362323832306666 Jan 26 18:17:45.431000 audit: BPF prog-id=100 op=UNLOAD Jan 26 18:17:45.431000 audit[2631]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665636562326237646665306335613039666365366362323832306666 Jan 26 18:17:45.432000 audit: BPF prog-id=102 op=LOAD Jan 26 18:17:45.432000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2510 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665636562326237646665306335613039666365366362323832306666 Jan 26 18:17:45.435598 systemd[1]: Started cri-containerd-eee3ea2f15fb36ce4d0987fb7ab595c3023f126d6bf70769c90bd1d7bb1fe268.scope - libcontainer container eee3ea2f15fb36ce4d0987fb7ab595c3023f126d6bf70769c90bd1d7bb1fe268. 
Jan 26 18:17:45.438606 kubelet[2443]: W0126 18:17:45.438493 2443 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jan 26 18:17:45.439811 kubelet[2443]: E0126 18:17:45.438650 2443 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:17:45.447000 audit: BPF prog-id=103 op=LOAD Jan 26 18:17:45.448000 audit: BPF prog-id=104 op=LOAD Jan 26 18:17:45.448000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2531 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730323962336134656137656365323832383466386435383136333664 Jan 26 18:17:45.448000 audit: BPF prog-id=104 op=UNLOAD Jan 26 18:17:45.448000 audit[2646]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730323962336134656137656365323832383466386435383136333664 Jan 26 18:17:45.448000 audit: BPF prog-id=105 op=LOAD Jan 26 18:17:45.448000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2531 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730323962336134656137656365323832383466386435383136333664 Jan 26 18:17:45.448000 audit: BPF prog-id=106 op=LOAD Jan 26 18:17:45.448000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2531 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730323962336134656137656365323832383466386435383136333664 Jan 26 18:17:45.448000 audit: BPF prog-id=106 op=UNLOAD Jan 26 18:17:45.448000 
audit[2646]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730323962336134656137656365323832383466386435383136333664 Jan 26 18:17:45.448000 audit: BPF prog-id=105 op=UNLOAD Jan 26 18:17:45.448000 audit[2646]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730323962336134656137656365323832383466386435383136333664 Jan 26 18:17:45.448000 audit: BPF prog-id=107 op=LOAD Jan 26 18:17:45.448000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2531 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730323962336134656137656365323832383466386435383136333664 Jan 26 18:17:45.477000 audit: BPF prog-id=108 op=LOAD Jan 26 18:17:45.478000 audit: BPF prog-id=109 op=LOAD Jan 26 18:17:45.478000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2527 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653365613266313566623336636534643039383766623761623539 Jan 26 18:17:45.478000 audit: BPF prog-id=109 op=UNLOAD Jan 26 18:17:45.478000 audit[2647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2527 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653365613266313566623336636534643039383766623761623539 Jan 26 18:17:45.479000 audit: BPF prog-id=110 op=LOAD Jan 26 18:17:45.479000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 
ppid=2527 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653365613266313566623336636534643039383766623761623539 Jan 26 18:17:45.479000 audit: BPF prog-id=111 op=LOAD Jan 26 18:17:45.479000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2527 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653365613266313566623336636534643039383766623761623539 Jan 26 18:17:45.479000 audit: BPF prog-id=111 op=UNLOAD Jan 26 18:17:45.479000 audit[2647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2527 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653365613266313566623336636534643039383766623761623539 Jan 26 18:17:45.480000 audit: BPF prog-id=110 op=UNLOAD Jan 26 18:17:45.480000 audit[2647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2527 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653365613266313566623336636534643039383766623761623539 Jan 26 18:17:45.480000 audit: BPF prog-id=112 op=LOAD Jan 26 18:17:45.480000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2527 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:45.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565653365613266313566623336636534643039383766623761623539 Jan 26 18:17:45.494737 kubelet[2443]: E0126 18:17:45.494563 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="1.6s" Jan 26 
18:17:45.544283 containerd[1597]: time="2026-01-26T18:17:45.544216418Z" level=info msg="StartContainer for \"7029b3a4ea7ece28284f8d581636d919ead9835d40a984daabf58c0883d44aca\" returns successfully" Jan 26 18:17:45.552576 containerd[1597]: time="2026-01-26T18:17:45.552493066Z" level=info msg="StartContainer for \"6eceb2b7dfe0c5a09fce6cb2820ff3e1cf5d2bc43728f8ad1d422a1d8a16a772\" returns successfully" Jan 26 18:17:45.586052 containerd[1597]: time="2026-01-26T18:17:45.585972618Z" level=info msg="StartContainer for \"eee3ea2f15fb36ce4d0987fb7ab595c3023f126d6bf70769c90bd1d7bb1fe268\" returns successfully" Jan 26 18:17:45.865336 kubelet[2443]: I0126 18:17:45.865315 2443 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 26 18:17:45.866533 kubelet[2443]: E0126 18:17:45.866505 2443 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 26 18:17:46.186673 kubelet[2443]: E0126 18:17:46.186560 2443 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:17:46.187114 kubelet[2443]: E0126 18:17:46.187098 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:46.191344 kubelet[2443]: E0126 18:17:46.191241 2443 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:17:46.191565 kubelet[2443]: E0126 18:17:46.191491 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:46.194975 kubelet[2443]: E0126 18:17:46.194689 2443 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:17:46.194975 kubelet[2443]: E0126 18:17:46.194884 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:47.196951 kubelet[2443]: E0126 18:17:47.196812 2443 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:17:47.198176 kubelet[2443]: E0126 18:17:47.197225 2443 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:17:47.198176 kubelet[2443]: E0126 18:17:47.198044 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:47.198176 kubelet[2443]: E0126 18:17:47.198058 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:47.596514 kubelet[2443]: I0126 18:17:47.596435 2443 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 26 18:17:48.565880 kubelet[2443]: E0126 18:17:48.565532 2443 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"localhost\" not found" node="localhost" Jan 26 18:17:48.565880 kubelet[2443]: E0126 18:17:48.565891 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:49.352197 kubelet[2443]: E0126 18:17:49.352048 2443 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 26 18:17:49.420795 kubelet[2443]: I0126 18:17:49.420608 2443 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 26 18:17:49.495661 kubelet[2443]: I0126 18:17:49.495502 2443 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 26 18:17:49.576198 kubelet[2443]: E0126 18:17:49.576044 2443 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 26 18:17:49.576198 kubelet[2443]: I0126 18:17:49.576118 2443 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 26 18:17:49.580895 kubelet[2443]: E0126 18:17:49.580324 2443 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 26 18:17:49.580895 kubelet[2443]: I0126 18:17:49.580345 2443 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 26 18:17:49.584249 kubelet[2443]: E0126 18:17:49.583999 2443 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 26 18:17:50.092589 kubelet[2443]: I0126 18:17:50.092430 2443 apiserver.go:52] "Watching apiserver" Jan 26 18:17:50.191361 kubelet[2443]: I0126 18:17:50.191100 2443 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 26 18:17:50.457386 kubelet[2443]: I0126 18:17:50.457039 2443 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 26 18:17:50.545248 kubelet[2443]: E0126 18:17:50.545172 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:51.232953 kubelet[2443]: E0126 18:17:51.232762 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:52.333525 systemd[1]: Reload requested from client PID 2735 ('systemctl') (unit session-10.scope)... Jan 26 18:17:52.333571 systemd[1]: Reloading... Jan 26 18:17:52.421947 zram_generator::config[2781]: No configuration found. Jan 26 18:17:52.691118 systemd[1]: Reloading finished in 356 ms. Jan 26 18:17:52.741390 kubelet[2443]: I0126 18:17:52.741249 2443 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 26 18:17:52.741435 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:17:52.761116 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 26 18:17:52.761536 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:17:52.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:52.761660 systemd[1]: kubelet.service: Consumed 2.690s CPU time, 132.9M memory peak. Jan 26 18:17:52.764360 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 26 18:17:52.764430 kernel: audit: type=1131 audit(1769451472.760:391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:52.765060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 26 18:17:52.774106 kernel: audit: type=1334 audit(1769451472.764:392): prog-id=113 op=LOAD Jan 26 18:17:52.764000 audit: BPF prog-id=113 op=LOAD Jan 26 18:17:52.764000 audit: BPF prog-id=70 op=UNLOAD Jan 26 18:17:52.780687 kernel: audit: type=1334 audit(1769451472.764:393): prog-id=70 op=UNLOAD Jan 26 18:17:52.780800 kernel: audit: type=1334 audit(1769451472.764:394): prog-id=114 op=LOAD Jan 26 18:17:52.764000 audit: BPF prog-id=114 op=LOAD Jan 26 18:17:52.784009 kernel: audit: type=1334 audit(1769451472.764:395): prog-id=115 op=LOAD Jan 26 18:17:52.764000 audit: BPF prog-id=115 op=LOAD Jan 26 18:17:52.787023 kernel: audit: type=1334 audit(1769451472.764:396): prog-id=71 op=UNLOAD Jan 26 18:17:52.764000 audit: BPF prog-id=71 op=UNLOAD Jan 26 18:17:52.764000 audit: BPF prog-id=72 op=UNLOAD Jan 26 18:17:52.798065 kernel: audit: type=1334 audit(1769451472.764:397): prog-id=72 op=UNLOAD Jan 26 18:17:52.798109 kernel: audit: type=1334 audit(1769451472.764:398): prog-id=116 op=LOAD Jan 26 18:17:52.798133 kernel: audit: type=1334 audit(1769451472.764:399): prog-id=77 op=UNLOAD Jan 26 18:17:52.764000 audit: BPF prog-id=116 op=LOAD Jan 26 18:17:52.764000 audit: BPF prog-id=77 op=UNLOAD Jan 26 18:17:52.764000 audit: BPF prog-id=117 op=LOAD Jan 26 18:17:52.803617 kernel: audit: type=1334 audit(1769451472.764:400): prog-id=117 op=LOAD Jan 26 18:17:52.764000 audit: BPF prog-id=118 op=LOAD Jan 26 18:17:52.764000 audit: BPF prog-id=78 op=UNLOAD Jan 26 18:17:52.764000 audit: BPF prog-id=79 op=UNLOAD Jan 26 18:17:52.769000 audit: BPF prog-id=119 op=LOAD Jan 26 18:17:52.769000 audit: BPF prog-id=63 op=UNLOAD Jan 26 18:17:52.769000 audit: BPF prog-id=120 op=LOAD Jan 26 18:17:52.769000 audit: BPF prog-id=121 op=LOAD Jan 26 18:17:52.769000 audit: BPF prog-id=64 op=UNLOAD Jan 26 18:17:52.769000 audit: BPF prog-id=65 op=UNLOAD Jan 26 18:17:52.769000 audit: BPF prog-id=122 op=LOAD Jan 26 18:17:52.769000 audit: BPF prog-id=123 op=LOAD Jan 26 18:17:52.769000 audit: BPF prog-id=81 op=UNLOAD Jan 26 18:17:52.769000 audit: BPF prog-id=82 op=UNLOAD Jan 26 18:17:52.769000 audit: BPF prog-id=124 op=LOAD Jan 26 18:17:52.769000 audit: BPF prog-id=73 op=UNLOAD Jan 26 18:17:52.769000 audit: BPF prog-id=125 op=LOAD Jan 26 18:17:52.769000 audit: BPF prog-id=126 op=LOAD Jan 26 18:17:52.769000 audit: BPF prog-id=74 op=UNLOAD Jan 26 18:17:52.769000 audit: BPF prog-id=75 op=UNLOAD Jan 26 18:17:52.773000 audit: BPF prog-id=127 op=LOAD Jan 26 18:17:52.773000 audit: BPF prog-id=66 op=UNLOAD Jan 26 18:17:52.793000 audit: BPF prog-id=128 op=LOAD Jan 26 18:17:52.793000 audit: BPF prog-id=67 op=UNLOAD Jan 26 18:17:52.793000 audit: BPF prog-id=129 op=LOAD Jan 26 18:17:52.793000 audit: BPF 
prog-id=130 op=LOAD Jan 26 18:17:52.793000 audit: BPF prog-id=68 op=UNLOAD Jan 26 18:17:52.793000 audit: BPF prog-id=69 op=UNLOAD Jan 26 18:17:52.797000 audit: BPF prog-id=131 op=LOAD Jan 26 18:17:52.797000 audit: BPF prog-id=80 op=UNLOAD Jan 26 18:17:52.798000 audit: BPF prog-id=132 op=LOAD Jan 26 18:17:52.798000 audit: BPF prog-id=76 op=UNLOAD Jan 26 18:17:53.048099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 26 18:17:53.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:17:53.062468 (kubelet)[2826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 26 18:17:53.131072 kubelet[2826]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 18:17:53.131072 kubelet[2826]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 26 18:17:53.131072 kubelet[2826]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 18:17:53.131803 kubelet[2826]: I0126 18:17:53.131684 2826 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 18:17:53.142872 kubelet[2826]: I0126 18:17:53.142783 2826 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 26 18:17:53.143121 kubelet[2826]: I0126 18:17:53.142899 2826 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 18:17:53.143121 kubelet[2826]: I0126 18:17:53.143071 2826 server.go:954] "Client rotation is on, will bootstrap in background" Jan 26 18:17:53.146099 kubelet[2826]: I0126 18:17:53.145958 2826 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 26 18:17:53.149848 kubelet[2826]: I0126 18:17:53.149677 2826 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 26 18:17:53.168600 kubelet[2826]: I0126 18:17:53.168548 2826 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 26 18:17:53.176068 kubelet[2826]: I0126 18:17:53.176029 2826 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 26 18:17:53.176306 kubelet[2826]: I0126 18:17:53.176262 2826 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 18:17:53.176473 kubelet[2826]: I0126 18:17:53.176291 2826 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 18:17:53.176765 kubelet[2826]: I0126 18:17:53.176497 2826 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 18:17:53.176765 kubelet[2826]: I0126 18:17:53.176509 2826 container_manager_linux.go:304] "Creating device plugin manager" Jan 26 18:17:53.176765 kubelet[2826]: I0126 18:17:53.176552 2826 state_mem.go:36] "Initialized new in-memory state store" Jan 26 18:17:53.176937 kubelet[2826]: I0126 18:17:53.176815 2826 kubelet.go:446] "Attempting to sync node with API server" Jan 26 18:17:53.176991 kubelet[2826]: I0126 18:17:53.176976 2826 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 26 18:17:53.177019 kubelet[2826]: I0126 18:17:53.176996 2826 kubelet.go:352] "Adding apiserver pod source" Jan 26 18:17:53.177019 kubelet[2826]: I0126 18:17:53.177006 2826 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 26 18:17:53.180091 kubelet[2826]: I0126 18:17:53.180037 2826 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 26 18:17:53.180791 kubelet[2826]: I0126 18:17:53.180745 2826 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 26 18:17:53.183979 kubelet[2826]: I0126 18:17:53.183037 2826 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 26 18:17:53.183979 kubelet[2826]: I0126 18:17:53.183067 2826 server.go:1287] "Started kubelet" Jan 26 18:17:53.191321 kubelet[2826]: I0126 18:17:53.191261 2826 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 18:17:53.195908 kubelet[2826]: E0126 18:17:53.195088 2826 kubelet.go:1555] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 26 18:17:53.198207 kubelet[2826]: I0126 18:17:53.198026 2826 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 26 18:17:53.200289 kubelet[2826]: I0126 18:17:53.200172 2826 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 18:17:53.201796 kubelet[2826]: I0126 18:17:53.201662 2826 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 26 18:17:53.202561 kubelet[2826]: I0126 18:17:53.202465 2826 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 26 18:17:53.203601 kubelet[2826]: I0126 18:17:53.202483 2826 reconciler.go:26] "Reconciler: start to sync state" Jan 26 18:17:53.213437 kubelet[2826]: I0126 18:17:53.212987 2826 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 18:17:53.217986 kubelet[2826]: I0126 18:17:53.217751 2826 factory.go:221] Registration of the containerd container factory successfully Jan 26 18:17:53.217986 kubelet[2826]: I0126 18:17:53.217800 2826 factory.go:221] Registration of the systemd container factory successfully Jan 26 18:17:53.217986 kubelet[2826]: I0126 18:17:53.217944 2826 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 26 18:17:53.218566 kubelet[2826]: I0126 18:17:53.218499 2826 server.go:479] "Adding debug handlers to kubelet server" Jan 26 18:17:53.219387 kubelet[2826]: I0126 18:17:53.219370 2826 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 18:17:53.226652 kubelet[2826]: I0126 18:17:53.226581 2826 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 18:17:53.229240 kubelet[2826]: I0126 18:17:53.229210 2826 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 26 18:17:53.229337 kubelet[2826]: I0126 18:17:53.229321 2826 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 26 18:17:53.229363 kubelet[2826]: I0126 18:17:53.229343 2826 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 26 18:17:53.229363 kubelet[2826]: I0126 18:17:53.229351 2826 kubelet.go:2382] "Starting kubelet main sync loop" Jan 26 18:17:53.229507 kubelet[2826]: E0126 18:17:53.229457 2826 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 18:17:53.283080 kubelet[2826]: I0126 18:17:53.282995 2826 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 26 18:17:53.283080 kubelet[2826]: I0126 18:17:53.283037 2826 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 26 18:17:53.283080 kubelet[2826]: I0126 18:17:53.283054 2826 state_mem.go:36] "Initialized new in-memory state store" Jan 26 18:17:53.283238 kubelet[2826]: I0126 18:17:53.283203 2826 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 26 18:17:53.283238 kubelet[2826]: I0126 18:17:53.283214 2826 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 26 18:17:53.283238 kubelet[2826]: I0126 18:17:53.283232 2826 policy_none.go:49] "None policy: Start" Jan 26 18:17:53.283238 kubelet[2826]: I0126 18:17:53.283241 2826 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 26 18:17:53.283379 kubelet[2826]: I0126 18:17:53.283252 2826 state_mem.go:35] "Initializing new in-memory state store" Jan 26 18:17:53.283379 kubelet[2826]: I0126 18:17:53.283347 2826 state_mem.go:75] "Updated machine memory state" Jan 26 18:17:53.291525 kubelet[2826]: I0126 18:17:53.291406 2826 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 18:17:53.292155 kubelet[2826]: I0126 18:17:53.292068 2826 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 18:17:53.292194 kubelet[2826]: I0126 18:17:53.292141 2826 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 18:17:53.292548 kubelet[2826]: I0126 18:17:53.292473 2826 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 18:17:53.298626 kubelet[2826]: E0126 18:17:53.298384 2826 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 26 18:17:53.330693 kubelet[2826]: I0126 18:17:53.330073 2826 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 26 18:17:53.331309 kubelet[2826]: I0126 18:17:53.331205 2826 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 26 18:17:53.331309 kubelet[2826]: I0126 18:17:53.331249 2826 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 26 18:17:53.348914 kubelet[2826]: E0126 18:17:53.348249 2826 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 26 18:17:53.404657 kubelet[2826]: I0126 18:17:53.404556 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:17:53.404657 kubelet[2826]: I0126 18:17:53.404631 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:17:53.404910 kubelet[2826]: I0126 18:17:53.404666 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 26 18:17:53.404910 kubelet[2826]: I0126 18:17:53.404694 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e21cd28fce555573da7e6541dec4d111-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e21cd28fce555573da7e6541dec4d111\") " pod="kube-system/kube-apiserver-localhost" Jan 26 18:17:53.404910 kubelet[2826]: I0126 18:17:53.404785 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:17:53.404910 kubelet[2826]: I0126 18:17:53.404811 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 26 18:17:53.405035 kubelet[2826]: I0126 18:17:53.404929 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 26 18:17:53.405035 kubelet[2826]: I0126 18:17:53.404944 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e21cd28fce555573da7e6541dec4d111-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e21cd28fce555573da7e6541dec4d111\") " pod="kube-system/kube-apiserver-localhost" Jan 26 18:17:53.405035 kubelet[2826]: I0126 18:17:53.404959 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e21cd28fce555573da7e6541dec4d111-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e21cd28fce555573da7e6541dec4d111\") " pod="kube-system/kube-apiserver-localhost" Jan 26 18:17:53.409945 kubelet[2826]: I0126 18:17:53.408969 2826 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 26 18:17:53.425383 kubelet[2826]: I0126 18:17:53.425316 2826 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 26 18:17:53.425654 kubelet[2826]: I0126 18:17:53.425536 2826 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 26 18:17:53.650434 kubelet[2826]: E0126 18:17:53.650125 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:53.651930 kubelet[2826]: E0126 18:17:53.650942 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:53.651930 kubelet[2826]: E0126 18:17:53.651342 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:54.177503 kubelet[2826]: I0126 18:17:54.177390 2826 apiserver.go:52] "Watching apiserver" Jan 26 18:17:54.204589 kubelet[2826]: I0126 18:17:54.203350 2826 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 26 18:17:54.268629 kubelet[2826]: I0126 18:17:54.268502 2826 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 26 18:17:54.269201 kubelet[2826]: I0126 18:17:54.269119 2826 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 26 18:17:54.269467 kubelet[2826]: E0126 18:17:54.269448 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:54.283810 kubelet[2826]: E0126 18:17:54.283382 2826 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 26 18:17:54.283810 kubelet[2826]: E0126 18:17:54.283523 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:54.286489 kubelet[2826]: E0126 18:17:54.286420 2826 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 26 18:17:54.286806 kubelet[2826]: E0126 18:17:54.286693 2826 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:54.340912 kubelet[2826]: I0126 18:17:54.340214 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.340198773 podStartE2EDuration="1.340198773s" podCreationTimestamp="2026-01-26 18:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:17:54.326114353 +0000 UTC m=+1.256975055" watchObservedRunningTime="2026-01-26 18:17:54.340198773 +0000 UTC m=+1.271059476" Jan 26 18:17:54.341251 kubelet[2826]: I0126 18:17:54.340781 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.340766952 podStartE2EDuration="4.340766952s" podCreationTimestamp="2026-01-26 18:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:17:54.339057617 +0000 UTC m=+1.269918319" watchObservedRunningTime="2026-01-26 18:17:54.340766952 +0000 UTC m=+1.271627654" Jan 26 18:17:54.352914 kubelet[2826]: I0126 18:17:54.352782 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.352767613 podStartE2EDuration="1.352767613s" podCreationTimestamp="2026-01-26 18:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:17:54.35249215 +0000 UTC m=+1.283352852" watchObservedRunningTime="2026-01-26 18:17:54.352767613 +0000 UTC m=+1.283628315" Jan 26 18:17:55.272428 kubelet[2826]: E0126 18:17:55.272352 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:55.273187 kubelet[2826]: E0126 18:17:55.273150 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:56.273325 kubelet[2826]: E0126 18:17:56.273209 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:57.274437 kubelet[2826]: E0126 18:17:57.274302 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:58.145163 kubelet[2826]: I0126 18:17:58.145102 2826 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 26 18:17:58.146401 containerd[1597]: time="2026-01-26T18:17:58.146352714Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 26 18:17:58.147388 kubelet[2826]: I0126 18:17:58.147205 2826 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 26 18:17:58.821254 systemd[1]: Created slice kubepods-besteffort-pod7cfdcffe_fe7e_48c1_a363_56151f10a759.slice - libcontainer container kubepods-besteffort-pod7cfdcffe_fe7e_48c1_a363_56151f10a759.slice. 
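Right after the kubelet pushes podCIDR 192.168.0.0/24 through the CRI, containerd notes that no CNI config template is specified and that it will wait for another component to drop one. Purely as an illustration of the general shape of such a config, a sketch follows; the bridge/host-local plugin choice and the file name are assumptions, and on this node the tigera-operator started below would be expected to install Calico's own configuration instead.

```python
#!/usr/bin/env python3
"""Illustrative sketch: the general shape of a CNI .conflist that a network
add-on could drop into /etc/cni/net.d/. Plugin choice and name are assumed;
only the podCIDR value comes from the log above."""
import json

podcidr = "192.168.0.0/24"  # value pushed through the CRI in the log above

conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": podcidr},
        }
    ],
}

# A real add-on would write this to e.g. /etc/cni/net.d/10-example.conflist
print(json.dumps(conflist, indent=2))
```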
Jan 26 18:17:58.953445 kubelet[2826]: I0126 18:17:58.953239 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjhr\" (UniqueName: \"kubernetes.io/projected/7cfdcffe-fe7e-48c1-a363-56151f10a759-kube-api-access-nhjhr\") pod \"kube-proxy-577hr\" (UID: \"7cfdcffe-fe7e-48c1-a363-56151f10a759\") " pod="kube-system/kube-proxy-577hr" Jan 26 18:17:58.953445 kubelet[2826]: I0126 18:17:58.953306 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7cfdcffe-fe7e-48c1-a363-56151f10a759-kube-proxy\") pod \"kube-proxy-577hr\" (UID: \"7cfdcffe-fe7e-48c1-a363-56151f10a759\") " pod="kube-system/kube-proxy-577hr" Jan 26 18:17:58.953445 kubelet[2826]: I0126 18:17:58.953326 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cfdcffe-fe7e-48c1-a363-56151f10a759-xtables-lock\") pod \"kube-proxy-577hr\" (UID: \"7cfdcffe-fe7e-48c1-a363-56151f10a759\") " pod="kube-system/kube-proxy-577hr" Jan 26 18:17:58.953445 kubelet[2826]: I0126 18:17:58.953339 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cfdcffe-fe7e-48c1-a363-56151f10a759-lib-modules\") pod \"kube-proxy-577hr\" (UID: \"7cfdcffe-fe7e-48c1-a363-56151f10a759\") " pod="kube-system/kube-proxy-577hr" Jan 26 18:17:59.135119 kubelet[2826]: E0126 18:17:59.134617 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:59.135935 containerd[1597]: time="2026-01-26T18:17:59.135780758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-577hr,Uid:7cfdcffe-fe7e-48c1-a363-56151f10a759,Namespace:kube-system,Attempt:0,}" Jan 26 18:17:59.200881 containerd[1597]: time="2026-01-26T18:17:59.200183855Z" level=info msg="connecting to shim 29c3da8edc56c442b57aeabfa4b0ddc8df0867f6b060c24a1029b1a39903d08f" address="unix:///run/containerd/s/c75d3f3a830de1e88b610823d7fe190ec80084116ac07190b97e91c90d1a7c15" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:17:59.300109 systemd[1]: Started cri-containerd-29c3da8edc56c442b57aeabfa4b0ddc8df0867f6b060c24a1029b1a39903d08f.scope - libcontainer container 29c3da8edc56c442b57aeabfa4b0ddc8df0867f6b060c24a1029b1a39903d08f. Jan 26 18:17:59.311343 systemd[1]: Created slice kubepods-besteffort-pod5209e3c8_5cfe_4958_bf60_cd24284123ac.slice - libcontainer container kubepods-besteffort-pod5209e3c8_5cfe_4958_bf60_cd24284123ac.slice. 
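The reconciler entries above attach four volumes to kube-proxy-577hr (pod UID 7cfdcffe-fe7e-48c1-a363-56151f10a759). Assuming the standard kubelet on-disk layout, which the log itself does not show, the projected service-account token and the kube-proxy configmap would be materialized under /var/lib/kubelet/pods/<pod UID>/volumes/, as in this sketch:

```python
#!/usr/bin/env python3
"""Sketch, assuming the standard kubelet volumes layout (not stated in the
log): list the on-disk volume directories for kube-proxy-577hr."""
import glob
import os

pod_uid = "7cfdcffe-fe7e-48c1-a363-56151f10a759"  # UID from the log above
base = f"/var/lib/kubelet/pods/{pod_uid}/volumes"

# Expected entries include kubernetes.io~projected/kube-api-access-nhjhr and
# kubernetes.io~configmap/kube-proxy; the hostPath volumes (xtables-lock,
# lib-modules) are bind mounts of existing host files and do not appear here.
for path in sorted(glob.glob(os.path.join(base, "*", "*"))):
    print(path)
```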
Jan 26 18:17:59.317000 audit: BPF prog-id=133 op=LOAD Jan 26 18:17:59.321701 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 26 18:17:59.321787 kernel: audit: type=1334 audit(1769451479.317:433): prog-id=133 op=LOAD Jan 26 18:17:59.320000 audit: BPF prog-id=134 op=LOAD Jan 26 18:17:59.328574 kernel: audit: type=1334 audit(1769451479.320:434): prog-id=134 op=LOAD Jan 26 18:17:59.320000 audit[2899]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2887 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.340652 kernel: audit: type=1300 audit(1769451479.320:434): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2887 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.340682 kernel: audit: type=1327 audit(1769451479.320:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239633364613865646335366334343262353761656162666134623064 Jan 26 18:17:59.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239633364613865646335366334343262353761656162666134623064 Jan 26 18:17:59.320000 audit: BPF prog-id=134 op=UNLOAD Jan 26 18:17:59.355917 kernel: audit: type=1334 audit(1769451479.320:435): prog-id=134 op=UNLOAD Jan 26 18:17:59.355993 kernel: audit: type=1300 audit(1769451479.320:435): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2887 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.320000 audit[2899]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2887 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.356511 kubelet[2826]: I0126 18:17:59.356476 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pshmb\" (UniqueName: \"kubernetes.io/projected/5209e3c8-5cfe-4958-bf60-cd24284123ac-kube-api-access-pshmb\") pod \"tigera-operator-7dcd859c48-dxbmj\" (UID: \"5209e3c8-5cfe-4958-bf60-cd24284123ac\") " pod="tigera-operator/tigera-operator-7dcd859c48-dxbmj" Jan 26 18:17:59.356964 kubelet[2826]: I0126 18:17:59.356747 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5209e3c8-5cfe-4958-bf60-cd24284123ac-var-lib-calico\") pod \"tigera-operator-7dcd859c48-dxbmj\" (UID: \"5209e3c8-5cfe-4958-bf60-cd24284123ac\") " pod="tigera-operator/tigera-operator-7dcd859c48-dxbmj" Jan 26 18:17:59.367907 kernel: audit: type=1327 audit(1769451479.320:435): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239633364613865646335366334343262353761656162666134623064 Jan 26 18:17:59.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239633364613865646335366334343262353761656162666134623064 Jan 26 18:17:59.320000 audit: BPF prog-id=135 op=LOAD Jan 26 18:17:59.382484 kernel: audit: type=1334 audit(1769451479.320:436): prog-id=135 op=LOAD Jan 26 18:17:59.382525 kernel: audit: type=1300 audit(1769451479.320:436): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2887 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.320000 audit[2899]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2887 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.383911 containerd[1597]: time="2026-01-26T18:17:59.383866926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-577hr,Uid:7cfdcffe-fe7e-48c1-a363-56151f10a759,Namespace:kube-system,Attempt:0,} returns sandbox id \"29c3da8edc56c442b57aeabfa4b0ddc8df0867f6b060c24a1029b1a39903d08f\"" Jan 26 18:17:59.385440 kubelet[2826]: E0126 18:17:59.385292 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:59.389470 containerd[1597]: time="2026-01-26T18:17:59.389272423Z" level=info msg="CreateContainer within sandbox \"29c3da8edc56c442b57aeabfa4b0ddc8df0867f6b060c24a1029b1a39903d08f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 26 18:17:59.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239633364613865646335366334343262353761656162666134623064 Jan 26 18:17:59.409387 kernel: audit: type=1327 audit(1769451479.320:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239633364613865646335366334343262353761656162666134623064 Jan 26 18:17:59.321000 audit: BPF prog-id=136 op=LOAD Jan 26 18:17:59.321000 audit[2899]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2887 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239633364613865646335366334343262353761656162666134623064 Jan 26 
18:17:59.321000 audit: BPF prog-id=136 op=UNLOAD Jan 26 18:17:59.321000 audit[2899]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2887 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239633364613865646335366334343262353761656162666134623064 Jan 26 18:17:59.321000 audit: BPF prog-id=135 op=UNLOAD Jan 26 18:17:59.321000 audit[2899]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2887 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239633364613865646335366334343262353761656162666134623064 Jan 26 18:17:59.321000 audit: BPF prog-id=137 op=LOAD Jan 26 18:17:59.321000 audit[2899]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2887 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239633364613865646335366334343262353761656162666134623064 Jan 26 18:17:59.411870 containerd[1597]: time="2026-01-26T18:17:59.411110629Z" level=info msg="Container 43395e748e12ea787481ffe4d980bbacbe736a71a643c42426c5abf9c94175d2: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:17:59.420429 containerd[1597]: time="2026-01-26T18:17:59.420406176Z" level=info msg="CreateContainer within sandbox \"29c3da8edc56c442b57aeabfa4b0ddc8df0867f6b060c24a1029b1a39903d08f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"43395e748e12ea787481ffe4d980bbacbe736a71a643c42426c5abf9c94175d2\"" Jan 26 18:17:59.421167 containerd[1597]: time="2026-01-26T18:17:59.421125260Z" level=info msg="StartContainer for \"43395e748e12ea787481ffe4d980bbacbe736a71a643c42426c5abf9c94175d2\"" Jan 26 18:17:59.422388 containerd[1597]: time="2026-01-26T18:17:59.422363599Z" level=info msg="connecting to shim 43395e748e12ea787481ffe4d980bbacbe736a71a643c42426c5abf9c94175d2" address="unix:///run/containerd/s/c75d3f3a830de1e88b610823d7fe190ec80084116ac07190b97e91c90d1a7c15" protocol=ttrpc version=3 Jan 26 18:17:59.454200 systemd[1]: Started cri-containerd-43395e748e12ea787481ffe4d980bbacbe736a71a643c42426c5abf9c94175d2.scope - libcontainer container 43395e748e12ea787481ffe4d980bbacbe736a71a643c42426c5abf9c94175d2. 
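The audit PROCTITLE fields in the records above (and in the iptables records further below) carry the audited command line as hex-encoded, NUL-separated arguments. A small helper, not part of the log, that decodes two values appearing in this section:

```python
#!/usr/bin/env python3
"""Decode audit PROCTITLE fields: hex string -> NUL-separated argv."""

def decode_proctitle(hexval: str) -> str:
    return " ".join(p.decode() for p in bytes.fromhex(hexval).split(b"\x00") if p)

# leading portion of the runc PROCTITLE recorded above
print(decode_proctitle(
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"))
# -> runc --root /run/containerd/runc/k8s.io

# one of the kube-proxy iptables PROCTITLE values recorded further below
print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"))
# -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
```

Decoded this way, the records show runc being invoked for the container shim and kube-proxy creating its KUBE-PROXY-CANARY chain in the mangle table, consistent with the kubelet's earlier "Initialized iptables rules." messages.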
Jan 26 18:17:59.520000 audit: BPF prog-id=138 op=LOAD Jan 26 18:17:59.520000 audit[2925]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2887 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433333935653734386531326561373837343831666665346439383062 Jan 26 18:17:59.520000 audit: BPF prog-id=139 op=LOAD Jan 26 18:17:59.520000 audit[2925]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2887 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433333935653734386531326561373837343831666665346439383062 Jan 26 18:17:59.520000 audit: BPF prog-id=139 op=UNLOAD Jan 26 18:17:59.520000 audit[2925]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2887 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433333935653734386531326561373837343831666665346439383062 Jan 26 18:17:59.520000 audit: BPF prog-id=138 op=UNLOAD Jan 26 18:17:59.520000 audit[2925]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2887 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433333935653734386531326561373837343831666665346439383062 Jan 26 18:17:59.520000 audit: BPF prog-id=140 op=LOAD Jan 26 18:17:59.520000 audit[2925]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2887 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433333935653734386531326561373837343831666665346439383062 Jan 26 18:17:59.550775 containerd[1597]: time="2026-01-26T18:17:59.550676804Z" level=info msg="StartContainer for 
\"43395e748e12ea787481ffe4d980bbacbe736a71a643c42426c5abf9c94175d2\" returns successfully" Jan 26 18:17:59.620378 containerd[1597]: time="2026-01-26T18:17:59.620235102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dxbmj,Uid:5209e3c8-5cfe-4958-bf60-cd24284123ac,Namespace:tigera-operator,Attempt:0,}" Jan 26 18:17:59.651685 kubelet[2826]: E0126 18:17:59.650779 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:17:59.652043 containerd[1597]: time="2026-01-26T18:17:59.651658699Z" level=info msg="connecting to shim a1fa64bf8c5199fe417a457d0af517749996cfc232a1467d1ee379241ec06821" address="unix:///run/containerd/s/52b5e3c531d52b2e4cbbde30ea8d272abe123828f7d642454562cdfca29bf21c" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:17:59.717125 systemd[1]: Started cri-containerd-a1fa64bf8c5199fe417a457d0af517749996cfc232a1467d1ee379241ec06821.scope - libcontainer container a1fa64bf8c5199fe417a457d0af517749996cfc232a1467d1ee379241ec06821. Jan 26 18:17:59.734000 audit: BPF prog-id=141 op=LOAD Jan 26 18:17:59.734000 audit: BPF prog-id=142 op=LOAD Jan 26 18:17:59.734000 audit[2977]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2964 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131666136346266386335313939666534313761343537643061663531 Jan 26 18:17:59.734000 audit: BPF prog-id=142 op=UNLOAD Jan 26 18:17:59.734000 audit[2977]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2964 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131666136346266386335313939666534313761343537643061663531 Jan 26 18:17:59.734000 audit: BPF prog-id=143 op=LOAD Jan 26 18:17:59.734000 audit[2977]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2964 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131666136346266386335313939666534313761343537643061663531 Jan 26 18:17:59.735000 audit: BPF prog-id=144 op=LOAD Jan 26 18:17:59.735000 audit[2977]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2964 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131666136346266386335313939666534313761343537643061663531 Jan 26 18:17:59.735000 audit: BPF prog-id=144 op=UNLOAD Jan 26 18:17:59.735000 audit[2977]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2964 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131666136346266386335313939666534313761343537643061663531 Jan 26 18:17:59.736000 audit: BPF prog-id=143 op=UNLOAD Jan 26 18:17:59.736000 audit[2977]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2964 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131666136346266386335313939666534313761343537643061663531 Jan 26 18:17:59.736000 audit: BPF prog-id=145 op=LOAD Jan 26 18:17:59.736000 audit[2977]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2964 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131666136346266386335313939666534313761343537643061663531 Jan 26 18:17:59.793445 containerd[1597]: time="2026-01-26T18:17:59.793301048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dxbmj,Uid:5209e3c8-5cfe-4958-bf60-cd24284123ac,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a1fa64bf8c5199fe417a457d0af517749996cfc232a1467d1ee379241ec06821\"" Jan 26 18:17:59.799271 containerd[1597]: time="2026-01-26T18:17:59.799223884Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 26 18:17:59.852000 audit[3033]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:59.852000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8adb5a00 a2=0 a3=7ffd8adb59ec items=0 ppid=2938 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.852000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 26 
18:17:59.855000 audit[3035]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:59.857000 audit[3032]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:59.857000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd671cdc70 a2=0 a3=7ffd671cdc5c items=0 ppid=2938 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.857000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 26 18:17:59.855000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffec6b0a20 a2=0 a3=7fffec6b0a0c items=0 ppid=2938 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.855000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 26 18:17:59.863000 audit[3038]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3038 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:59.863000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdafc030e0 a2=0 a3=7ffdafc030cc items=0 ppid=2938 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.863000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 26 18:17:59.869000 audit[3039]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:59.869000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf2e08840 a2=0 a3=7ffcf2e0882c items=0 ppid=2938 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.869000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 26 18:17:59.870000 audit[3040]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:17:59.870000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0f0f82f0 a2=0 a3=7ffd0f0f82dc items=0 ppid=2938 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.870000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 26 18:17:59.960000 audit[3041]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Jan 26 18:17:59.960000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffde3b2e760 a2=0 a3=7ffde3b2e74c items=0 ppid=2938 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.960000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 26 18:17:59.966000 audit[3043]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:59.966000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff68bf1be0 a2=0 a3=7fff68bf1bcc items=0 ppid=2938 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.966000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 26 18:17:59.973000 audit[3046]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:59.973000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcfd741f10 a2=0 a3=7ffcfd741efc items=0 ppid=2938 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.973000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 26 18:17:59.976000 audit[3047]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:59.976000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc065ecfa0 a2=0 a3=7ffc065ecf8c items=0 ppid=2938 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 26 18:17:59.980000 audit[3049]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:59.980000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffef0287b00 a2=0 a3=7ffef0287aec items=0 ppid=2938 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.980000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 26 18:17:59.983000 audit[3050]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:59.983000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddb458490 a2=0 a3=7ffddb45847c items=0 ppid=2938 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.983000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 26 18:17:59.988000 audit[3052]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:59.988000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcdc69bb90 a2=0 a3=7ffcdc69bb7c items=0 ppid=2938 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.988000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 26 18:17:59.996000 audit[3055]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:59.996000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcacc92da0 a2=0 a3=7ffcacc92d8c items=0 ppid=2938 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.996000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 26 18:17:59.998000 audit[3056]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:17:59.998000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9a3eb010 a2=0 a3=7ffc9a3eaffc items=0 ppid=2938 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:17:59.998000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 26 18:18:00.003000 audit[3058]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:18:00.003000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffda14c8690 a2=0 a3=7ffda14c867c items=0 
ppid=2938 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.003000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 26 18:18:00.005000 audit[3059]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:18:00.005000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcdeaf4040 a2=0 a3=7ffcdeaf402c items=0 ppid=2938 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.005000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 26 18:18:00.011000 audit[3061]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:18:00.011000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff03bf3890 a2=0 a3=7fff03bf387c items=0 ppid=2938 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.011000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 26 18:18:00.019000 audit[3064]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:18:00.019000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff045e76a0 a2=0 a3=7fff045e768c items=0 ppid=2938 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 26 18:18:00.027000 audit[3067]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:18:00.027000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdeab93c80 a2=0 a3=7ffdeab93c6c items=0 ppid=2938 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.027000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 26 18:18:00.029000 audit[3068]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:18:00.029000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff019702a0 a2=0 a3=7fff0197028c items=0 ppid=2938 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.029000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 26 18:18:00.035000 audit[3070]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:18:00.035000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd774b56b0 a2=0 a3=7ffd774b569c items=0 ppid=2938 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.035000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 26 18:18:00.042000 audit[3073]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:18:00.042000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc939d9a20 a2=0 a3=7ffc939d9a0c items=0 ppid=2938 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.042000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 26 18:18:00.044000 audit[3074]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:18:00.044000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd92891080 a2=0 a3=7ffd9289106c items=0 ppid=2938 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.044000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 26 18:18:00.049000 audit[3076]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 26 18:18:00.049000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd50e46b20 a2=0 a3=7ffd50e46b0c items=0 ppid=2938 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.049000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 26 18:18:00.080000 audit[3082]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:00.080000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffe7cc0f00 a2=0 a3=7fffe7cc0eec items=0 ppid=2938 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.080000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:00.091000 audit[3082]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:00.091000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fffe7cc0f00 a2=0 a3=7fffe7cc0eec items=0 ppid=2938 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.091000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:00.093000 audit[3087]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.093000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdb8aeb820 a2=0 a3=7ffdb8aeb80c items=0 ppid=2938 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.093000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 26 18:18:00.099000 audit[3089]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.099000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcb7008410 a2=0 a3=7ffcb70083fc items=0 ppid=2938 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.099000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 26 18:18:00.106000 audit[3092]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.106000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=752 a0=3 a1=7ffccd780a30 a2=0 a3=7ffccd780a1c items=0 ppid=2938 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.106000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 26 18:18:00.108000 audit[3093]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.108000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb67c09f0 a2=0 a3=7fffb67c09dc items=0 ppid=2938 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.108000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 26 18:18:00.113000 audit[3095]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.113000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff9a25abd0 a2=0 a3=7fff9a25abbc items=0 ppid=2938 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.113000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 26 18:18:00.116000 audit[3096]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.116000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe28e36000 a2=0 a3=7ffe28e35fec items=0 ppid=2938 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.116000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 26 18:18:00.121000 audit[3098]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.121000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe15aa2040 a2=0 a3=7ffe15aa202c items=0 ppid=2938 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.121000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 26 18:18:00.129000 audit[3101]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.129000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe764353d0 a2=0 a3=7ffe764353bc items=0 ppid=2938 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.129000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 26 18:18:00.132000 audit[3102]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.132000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0a539150 a2=0 a3=7ffc0a53913c items=0 ppid=2938 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.132000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 26 18:18:00.137000 audit[3104]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.137000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd4593b100 a2=0 a3=7ffd4593b0ec items=0 ppid=2938 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.137000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 26 18:18:00.139000 audit[3105]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.139000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce4e585e0 a2=0 a3=7ffce4e585cc items=0 ppid=2938 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.139000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 26 18:18:00.145000 audit[3107]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.145000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe3f630bb0 a2=0 a3=7ffe3f630b9c 
items=0 ppid=2938 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.145000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 26 18:18:00.153000 audit[3110]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.153000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdcc093490 a2=0 a3=7ffdcc09347c items=0 ppid=2938 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.153000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 26 18:18:00.161000 audit[3113]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.161000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd1c46b530 a2=0 a3=7ffd1c46b51c items=0 ppid=2938 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 26 18:18:00.164000 audit[3114]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.164000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdfa668950 a2=0 a3=7ffdfa66893c items=0 ppid=2938 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 26 18:18:00.169000 audit[3116]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.169000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffc89e5060 a2=0 a3=7fffc89e504c items=0 ppid=2938 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.169000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 26 18:18:00.176000 audit[3119]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.176000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc5f81d00 a2=0 a3=7fffc5f81cec items=0 ppid=2938 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.176000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 26 18:18:00.179000 audit[3120]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.179000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc2a41a30 a2=0 a3=7ffdc2a41a1c items=0 ppid=2938 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.179000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 26 18:18:00.185000 audit[3122]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.185000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcc17bf5e0 a2=0 a3=7ffcc17bf5cc items=0 ppid=2938 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.185000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 26 18:18:00.187000 audit[3123]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.187000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdecc9f70 a2=0 a3=7fffdecc9f5c items=0 ppid=2938 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.187000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 26 18:18:00.193000 audit[3125]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.193000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe7a9b4480 a2=0 a3=7ffe7a9b446c items=0 ppid=2938 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 26 18:18:00.200000 audit[3128]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 26 18:18:00.200000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd627555d0 a2=0 a3=7ffd627555bc items=0 ppid=2938 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.200000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 26 18:18:00.206000 audit[3130]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 26 18:18:00.206000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd1e1e32d0 a2=0 a3=7ffd1e1e32bc items=0 ppid=2938 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.206000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:00.207000 audit[3130]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 26 18:18:00.207000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd1e1e32d0 a2=0 a3=7ffd1e1e32bc items=0 ppid=2938 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:00.207000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:00.292140 kubelet[2826]: E0126 18:18:00.292027 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:00.292140 kubelet[2826]: E0126 18:18:00.292037 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:00.320130 kubelet[2826]: I0126 18:18:00.319640 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-577hr" podStartSLOduration=2.319617115 podStartE2EDuration="2.319617115s" podCreationTimestamp="2026-01-26 18:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:18:00.319366327 +0000 UTC m=+7.250227069" watchObservedRunningTime="2026-01-26 18:18:00.319617115 +0000 UTC m=+7.250477817" Jan 26 18:18:00.798929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152962579.mount: Deactivated successfully. 
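The NETFILTER_CFG / SYSCALL / PROCTITLE triples above record a child of pid 2938 (presumably kube-proxy, whose pod start is logged just afterwards) driving /usr/sbin/xtables-nft-multi to set up the KUBE-* chains. The PROCTITLE value in each record is the invoked command line, hex-encoded with NUL bytes separating the arguments. A minimal Python sketch, standard library only, for turning those values back into readable commands; the sample string is copied from the iptables-restore records above and the function name is illustrative:

    def decode_proctitle(hex_value: str) -> str:
        # Audit PROCTITLE records store argv as hex bytes, NUL-separated.
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

    sample = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
              "002D2D6E6F666C757368002D2D636F756E74657273")
    print(decode_proctitle(sample))
    # prints: iptables-restore -w 5 -W 100000 --noflush --counters
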
Jan 26 18:18:01.470188 containerd[1597]: time="2026-01-26T18:18:01.470114722Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:01.471041 containerd[1597]: time="2026-01-26T18:18:01.471003659Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 26 18:18:01.472579 containerd[1597]: time="2026-01-26T18:18:01.472446539Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:01.474675 containerd[1597]: time="2026-01-26T18:18:01.474584796Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:01.475276 containerd[1597]: time="2026-01-26T18:18:01.475179685Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.675923421s" Jan 26 18:18:01.475276 containerd[1597]: time="2026-01-26T18:18:01.475225220Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 26 18:18:01.478196 containerd[1597]: time="2026-01-26T18:18:01.478049650Z" level=info msg="CreateContainer within sandbox \"a1fa64bf8c5199fe417a457d0af517749996cfc232a1467d1ee379241ec06821\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 26 18:18:01.488137 containerd[1597]: time="2026-01-26T18:18:01.488081252Z" level=info msg="Container aa85b087939dae9d945e7db2089deba97ce38f9c9de9b5d7b746d6be17d41deb: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:18:01.495458 containerd[1597]: time="2026-01-26T18:18:01.495375672Z" level=info msg="CreateContainer within sandbox \"a1fa64bf8c5199fe417a457d0af517749996cfc232a1467d1ee379241ec06821\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"aa85b087939dae9d945e7db2089deba97ce38f9c9de9b5d7b746d6be17d41deb\"" Jan 26 18:18:01.496098 containerd[1597]: time="2026-01-26T18:18:01.496079508Z" level=info msg="StartContainer for \"aa85b087939dae9d945e7db2089deba97ce38f9c9de9b5d7b746d6be17d41deb\"" Jan 26 18:18:01.497037 containerd[1597]: time="2026-01-26T18:18:01.496920107Z" level=info msg="connecting to shim aa85b087939dae9d945e7db2089deba97ce38f9c9de9b5d7b746d6be17d41deb" address="unix:///run/containerd/s/52b5e3c531d52b2e4cbbde30ea8d272abe123828f7d642454562cdfca29bf21c" protocol=ttrpc version=3 Jan 26 18:18:01.538101 systemd[1]: Started cri-containerd-aa85b087939dae9d945e7db2089deba97ce38f9c9de9b5d7b746d6be17d41deb.scope - libcontainer container aa85b087939dae9d945e7db2089deba97ce38f9c9de9b5d7b746d6be17d41deb. 
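The containerd entries above give both the transfer size for the quay.io/tigera/operator pull (bytes read=23558205) and the elapsed pull time (1.675923421s), so the effective pull throughput can be read straight off the log. A small worked example with the figures copied from those lines (variable names are illustrative):

    # Figures copied from the containerd log lines above.
    bytes_read = 23_558_205          # "bytes read=23558205"
    pull_seconds = 1.675923421       # "in 1.675923421s"

    throughput = bytes_read / pull_seconds
    print(f"{throughput / 1e6:.1f} MB/s")   # roughly 14 MB/s for this pull
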
Jan 26 18:18:01.551000 audit: BPF prog-id=146 op=LOAD Jan 26 18:18:01.552000 audit: BPF prog-id=147 op=LOAD Jan 26 18:18:01.552000 audit[3139]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2964 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:01.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161383562303837393339646165396439343565376462323038396465 Jan 26 18:18:01.552000 audit: BPF prog-id=147 op=UNLOAD Jan 26 18:18:01.552000 audit[3139]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2964 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:01.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161383562303837393339646165396439343565376462323038396465 Jan 26 18:18:01.552000 audit: BPF prog-id=148 op=LOAD Jan 26 18:18:01.552000 audit[3139]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2964 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:01.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161383562303837393339646165396439343565376462323038396465 Jan 26 18:18:01.552000 audit: BPF prog-id=149 op=LOAD Jan 26 18:18:01.552000 audit[3139]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2964 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:01.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161383562303837393339646165396439343565376462323038396465 Jan 26 18:18:01.553000 audit: BPF prog-id=149 op=UNLOAD Jan 26 18:18:01.553000 audit[3139]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2964 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:01.553000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161383562303837393339646165396439343565376462323038396465 Jan 26 18:18:01.553000 audit: BPF prog-id=148 op=UNLOAD Jan 26 18:18:01.553000 audit[3139]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2964 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:01.553000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161383562303837393339646165396439343565376462323038396465 Jan 26 18:18:01.553000 audit: BPF prog-id=150 op=LOAD Jan 26 18:18:01.553000 audit[3139]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2964 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:01.553000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161383562303837393339646165396439343565376462323038396465 Jan 26 18:18:01.586075 containerd[1597]: time="2026-01-26T18:18:01.586025753Z" level=info msg="StartContainer for \"aa85b087939dae9d945e7db2089deba97ce38f9c9de9b5d7b746d6be17d41deb\" returns successfully" Jan 26 18:18:03.563042 kubelet[2826]: E0126 18:18:03.562962 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:03.597212 kubelet[2826]: I0126 18:18:03.596550 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-dxbmj" podStartSLOduration=2.91490375 podStartE2EDuration="4.596535108s" podCreationTimestamp="2026-01-26 18:17:59 +0000 UTC" firstStartedPulling="2026-01-26 18:17:59.794613868 +0000 UTC m=+6.725474570" lastFinishedPulling="2026-01-26 18:18:01.476245216 +0000 UTC m=+8.407105928" observedRunningTime="2026-01-26 18:18:02.310337974 +0000 UTC m=+9.241198677" watchObservedRunningTime="2026-01-26 18:18:03.596535108 +0000 UTC m=+10.527395811" Jan 26 18:18:04.312032 kubelet[2826]: E0126 18:18:04.311943 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:05.318632 kubelet[2826]: E0126 18:18:05.318591 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:07.574016 sudo[1859]: pam_unix(sudo:session): session closed for user root Jan 26 18:18:07.588473 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 26 18:18:07.588568 kernel: audit: type=1106 audit(1769451487.572:513): pid=1859 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:18:07.572000 audit[1859]: USER_END pid=1859 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 26 18:18:07.598980 kernel: audit: type=1104 audit(1769451487.573:514): pid=1859 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:18:07.573000 audit[1859]: CRED_DISP pid=1859 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 26 18:18:07.603232 sshd[1858]: Connection closed by 10.0.0.1 port 43670 Jan 26 18:18:07.603666 sshd-session[1846]: pam_unix(sshd:session): session closed for user core Jan 26 18:18:07.606000 audit[1846]: USER_END pid=1846 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:07.622986 kernel: audit: type=1106 audit(1769451487.606:515): pid=1846 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:07.620000 audit[1846]: CRED_DISP pid=1846 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:07.625965 systemd[1]: sshd@8-10.0.0.64:22-10.0.0.1:43670.service: Deactivated successfully. Jan 26 18:18:07.636262 systemd[1]: session-10.scope: Deactivated successfully. Jan 26 18:18:07.637190 systemd[1]: session-10.scope: Consumed 7.942s CPU time, 218.8M memory peak. Jan 26 18:18:07.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.64:22-10.0.0.1:43670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:07.642971 systemd-logind[1579]: Session 10 logged out. Waiting for processes to exit. Jan 26 18:18:07.645047 systemd-logind[1579]: Removed session 10. Jan 26 18:18:07.649982 kernel: audit: type=1104 audit(1769451487.620:516): pid=1846 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:07.650089 kernel: audit: type=1131 audit(1769451487.626:517): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.64:22-10.0.0.1:43670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:18:07.961000 audit[3228]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:07.975022 kernel: audit: type=1325 audit(1769451487.961:518): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:07.961000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd3794cef0 a2=0 a3=7ffd3794cedc items=0 ppid=2938 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:07.961000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:08.003624 kernel: audit: type=1300 audit(1769451487.961:518): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd3794cef0 a2=0 a3=7ffd3794cedc items=0 ppid=2938 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:08.003702 kernel: audit: type=1327 audit(1769451487.961:518): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:08.003727 kernel: audit: type=1325 audit(1769451487.976:519): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:07.976000 audit[3228]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:07.976000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd3794cef0 a2=0 a3=0 items=0 ppid=2938 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:07.976000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:08.027923 kernel: audit: type=1300 audit(1769451487.976:519): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd3794cef0 a2=0 a3=0 items=0 ppid=2938 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:08.019000 audit[3230]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:08.019000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc20711aa0 a2=0 a3=7ffc20711a8c items=0 ppid=2938 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:08.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:08.032000 audit[3230]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3230 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:08.032000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc20711aa0 a2=0 a3=0 items=0 ppid=2938 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:08.032000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:10.565000 audit[3233]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:10.565000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdd39fe130 a2=0 a3=7ffdd39fe11c items=0 ppid=2938 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:10.565000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:10.571000 audit[3233]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:10.571000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd39fe130 a2=0 a3=0 items=0 ppid=2938 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:10.571000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:10.637000 audit[3235]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:10.637000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffda9f83990 a2=0 a3=7ffda9f8397c items=0 ppid=2938 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:10.637000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:10.644000 audit[3235]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:10.644000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffda9f83990 a2=0 a3=0 items=0 ppid=2938 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:10.644000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:11.663000 audit[3237]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:11.663000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=7480 a0=3 a1=7ffc5ce04040 a2=0 a3=7ffc5ce0402c items=0 ppid=2938 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:11.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:11.671000 audit[3237]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:11.671000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc5ce04040 a2=0 a3=0 items=0 ppid=2938 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:11.671000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:12.454203 systemd[1]: Created slice kubepods-besteffort-pod14cb4524_06cf_4dce_8dbc_2ee84c99e9aa.slice - libcontainer container kubepods-besteffort-pod14cb4524_06cf_4dce_8dbc_2ee84c99e9aa.slice. Jan 26 18:18:12.469649 kubelet[2826]: I0126 18:18:12.468550 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcx7f\" (UniqueName: \"kubernetes.io/projected/14cb4524-06cf-4dce-8dbc-2ee84c99e9aa-kube-api-access-pcx7f\") pod \"calico-typha-8f7f497f-86mm5\" (UID: \"14cb4524-06cf-4dce-8dbc-2ee84c99e9aa\") " pod="calico-system/calico-typha-8f7f497f-86mm5" Jan 26 18:18:12.469649 kubelet[2826]: I0126 18:18:12.468719 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/14cb4524-06cf-4dce-8dbc-2ee84c99e9aa-typha-certs\") pod \"calico-typha-8f7f497f-86mm5\" (UID: \"14cb4524-06cf-4dce-8dbc-2ee84c99e9aa\") " pod="calico-system/calico-typha-8f7f497f-86mm5" Jan 26 18:18:12.469649 kubelet[2826]: I0126 18:18:12.468746 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14cb4524-06cf-4dce-8dbc-2ee84c99e9aa-tigera-ca-bundle\") pod \"calico-typha-8f7f497f-86mm5\" (UID: \"14cb4524-06cf-4dce-8dbc-2ee84c99e9aa\") " pod="calico-system/calico-typha-8f7f497f-86mm5" Jan 26 18:18:12.741024 systemd[1]: Created slice kubepods-besteffort-pod6199bf05_5e4c_4a1f_a96f_15ddd98af440.slice - libcontainer container kubepods-besteffort-pod6199bf05_5e4c_4a1f_a96f_15ddd98af440.slice. 
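The two "Created slice" entries above show the naming the kubelet's systemd cgroup driver uses for these BestEffort pods: the prefix kubepods-besteffort-pod, then the pod UID with dashes mapped to underscores, then a .slice suffix. A short sketch of that mapping, checked against the calico-typha and calico-node pod UIDs visible in the surrounding volume records (helper name is illustrative, and only the BestEffort case seen here is covered):

    def besteffort_pod_slice(pod_uid: str) -> str:
        # Matches the slice names logged above: dashes in the pod UID become underscores.
        return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

    assert besteffort_pod_slice("14cb4524-06cf-4dce-8dbc-2ee84c99e9aa") == \
        "kubepods-besteffort-pod14cb4524_06cf_4dce_8dbc_2ee84c99e9aa.slice"
    assert besteffort_pod_slice("6199bf05-5e4c-4a1f-a96f-15ddd98af440") == \
        "kubepods-besteffort-pod6199bf05_5e4c_4a1f_a96f_15ddd98af440.slice"
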
Jan 26 18:18:12.773211 kubelet[2826]: I0126 18:18:12.773070 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6199bf05-5e4c-4a1f-a96f-15ddd98af440-flexvol-driver-host\") pod \"calico-node-k6nmt\" (UID: \"6199bf05-5e4c-4a1f-a96f-15ddd98af440\") " pod="calico-system/calico-node-k6nmt" Jan 26 18:18:12.773211 kubelet[2826]: I0126 18:18:12.773189 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6199bf05-5e4c-4a1f-a96f-15ddd98af440-tigera-ca-bundle\") pod \"calico-node-k6nmt\" (UID: \"6199bf05-5e4c-4a1f-a96f-15ddd98af440\") " pod="calico-system/calico-node-k6nmt" Jan 26 18:18:12.773211 kubelet[2826]: I0126 18:18:12.773321 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6199bf05-5e4c-4a1f-a96f-15ddd98af440-node-certs\") pod \"calico-node-k6nmt\" (UID: \"6199bf05-5e4c-4a1f-a96f-15ddd98af440\") " pod="calico-system/calico-node-k6nmt" Jan 26 18:18:12.773211 kubelet[2826]: I0126 18:18:12.773336 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6199bf05-5e4c-4a1f-a96f-15ddd98af440-var-run-calico\") pod \"calico-node-k6nmt\" (UID: \"6199bf05-5e4c-4a1f-a96f-15ddd98af440\") " pod="calico-system/calico-node-k6nmt" Jan 26 18:18:12.773211 kubelet[2826]: I0126 18:18:12.773353 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6199bf05-5e4c-4a1f-a96f-15ddd98af440-xtables-lock\") pod \"calico-node-k6nmt\" (UID: \"6199bf05-5e4c-4a1f-a96f-15ddd98af440\") " pod="calico-system/calico-node-k6nmt" Jan 26 18:18:12.774139 kubelet[2826]: I0126 18:18:12.773370 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6199bf05-5e4c-4a1f-a96f-15ddd98af440-cni-bin-dir\") pod \"calico-node-k6nmt\" (UID: \"6199bf05-5e4c-4a1f-a96f-15ddd98af440\") " pod="calico-system/calico-node-k6nmt" Jan 26 18:18:12.774139 kubelet[2826]: I0126 18:18:12.773384 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsmg5\" (UniqueName: \"kubernetes.io/projected/6199bf05-5e4c-4a1f-a96f-15ddd98af440-kube-api-access-dsmg5\") pod \"calico-node-k6nmt\" (UID: \"6199bf05-5e4c-4a1f-a96f-15ddd98af440\") " pod="calico-system/calico-node-k6nmt" Jan 26 18:18:12.774139 kubelet[2826]: I0126 18:18:12.773397 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6199bf05-5e4c-4a1f-a96f-15ddd98af440-policysync\") pod \"calico-node-k6nmt\" (UID: \"6199bf05-5e4c-4a1f-a96f-15ddd98af440\") " pod="calico-system/calico-node-k6nmt" Jan 26 18:18:12.774139 kubelet[2826]: I0126 18:18:12.773409 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6199bf05-5e4c-4a1f-a96f-15ddd98af440-cni-log-dir\") pod \"calico-node-k6nmt\" (UID: \"6199bf05-5e4c-4a1f-a96f-15ddd98af440\") " pod="calico-system/calico-node-k6nmt" Jan 26 18:18:12.774139 kubelet[2826]: I0126 18:18:12.773444 2826 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6199bf05-5e4c-4a1f-a96f-15ddd98af440-var-lib-calico\") pod \"calico-node-k6nmt\" (UID: \"6199bf05-5e4c-4a1f-a96f-15ddd98af440\") " pod="calico-system/calico-node-k6nmt" Jan 26 18:18:12.776364 kubelet[2826]: I0126 18:18:12.773461 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6199bf05-5e4c-4a1f-a96f-15ddd98af440-cni-net-dir\") pod \"calico-node-k6nmt\" (UID: \"6199bf05-5e4c-4a1f-a96f-15ddd98af440\") " pod="calico-system/calico-node-k6nmt" Jan 26 18:18:12.776364 kubelet[2826]: I0126 18:18:12.773473 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6199bf05-5e4c-4a1f-a96f-15ddd98af440-lib-modules\") pod \"calico-node-k6nmt\" (UID: \"6199bf05-5e4c-4a1f-a96f-15ddd98af440\") " pod="calico-system/calico-node-k6nmt" Jan 26 18:18:12.776364 kubelet[2826]: E0126 18:18:12.773644 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:12.780048 containerd[1597]: time="2026-01-26T18:18:12.779805092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8f7f497f-86mm5,Uid:14cb4524-06cf-4dce-8dbc-2ee84c99e9aa,Namespace:calico-system,Attempt:0,}" Jan 26 18:18:12.888371 containerd[1597]: time="2026-01-26T18:18:12.888180353Z" level=info msg="connecting to shim 054cfab18439e5eb44c1c070b60760a3127fe576dfae64af2f41693cfdcf0294" address="unix:///run/containerd/s/223e2a27fc591da0e513db7104091c064b13ca9c0300cfbe4d38a1fca8eda0a1" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:18:12.930933 kubelet[2826]: E0126 18:18:12.929196 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:12.939919 kubelet[2826]: W0126 18:18:12.939159 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:12.939919 kubelet[2826]: E0126 18:18:12.939514 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:12.944294 kubelet[2826]: E0126 18:18:12.943487 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:12.945009 kubelet[2826]: W0126 18:18:12.944744 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:12.945144 kubelet[2826]: E0126 18:18:12.945111 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:12.962434 kubelet[2826]: E0126 18:18:12.961222 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:12.963031 kubelet[2826]: W0126 18:18:12.962099 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:12.963031 kubelet[2826]: E0126 18:18:12.962929 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:12.997694 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 26 18:18:12.998162 kernel: audit: type=1325 audit(1769451492.984:528): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3263 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:12.984000 audit[3263]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3263 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:12.984000 audit[3263]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc9717efb0 a2=0 a3=7ffc9717ef9c items=0 ppid=2938 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.029040 kernel: audit: type=1300 audit(1769451492.984:528): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc9717efb0 a2=0 a3=7ffc9717ef9c items=0 ppid=2938 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.029177 kernel: audit: type=1327 audit(1769451492.984:528): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:12.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:13.000000 audit[3263]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3263 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:13.044404 kernel: audit: type=1325 audit(1769451493.000:529): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3263 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:13.046220 kernel: audit: type=1300 audit(1769451493.000:529): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc9717efb0 a2=0 a3=0 items=0 ppid=2938 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.000000 audit[3263]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc9717efb0 a2=0 a3=0 items=0 ppid=2938 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.069999 kernel: audit: type=1327 audit(1769451493.000:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Jan 26 18:18:13.000000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:13.071007 kubelet[2826]: E0126 18:18:13.067050 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:13.077182 containerd[1597]: time="2026-01-26T18:18:13.076247076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k6nmt,Uid:6199bf05-5e4c-4a1f-a96f-15ddd98af440,Namespace:calico-system,Attempt:0,}" Jan 26 18:18:13.100594 systemd[1]: Started cri-containerd-054cfab18439e5eb44c1c070b60760a3127fe576dfae64af2f41693cfdcf0294.scope - libcontainer container 054cfab18439e5eb44c1c070b60760a3127fe576dfae64af2f41693cfdcf0294. Jan 26 18:18:13.172099 containerd[1597]: time="2026-01-26T18:18:13.171064526Z" level=info msg="connecting to shim a908b8c501289bbc3e14f7a897292cc6d927a62a1cb47d3ff3c9105cd0143241" address="unix:///run/containerd/s/633d715e13537ad55142537de9e680e446935053766c87ac055f23dbcef5deb1" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:18:13.185211 kernel: audit: type=1334 audit(1769451493.178:530): prog-id=151 op=LOAD Jan 26 18:18:13.178000 audit: BPF prog-id=151 op=LOAD Jan 26 18:18:13.190000 audit: BPF prog-id=152 op=LOAD Jan 26 18:18:13.198924 kernel: audit: type=1334 audit(1769451493.190:531): prog-id=152 op=LOAD Jan 26 18:18:13.190000 audit[3266]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3248 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.218088 kernel: audit: type=1300 audit(1769451493.190:531): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3248 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.229323 kernel: audit: type=1327 audit(1769451493.190:531): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035346366616231383433396535656234346331633037306236303736 Jan 26 18:18:13.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035346366616231383433396535656234346331633037306236303736 Jan 26 18:18:13.197000 audit: BPF prog-id=152 op=UNLOAD Jan 26 18:18:13.197000 audit[3266]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3248 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035346366616231383433396535656234346331633037306236303736 Jan 26 18:18:13.198000 audit: BPF prog-id=153 op=LOAD Jan 26 18:18:13.198000 
audit[3266]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3248 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035346366616231383433396535656234346331633037306236303736 Jan 26 18:18:13.198000 audit: BPF prog-id=154 op=LOAD Jan 26 18:18:13.198000 audit[3266]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3248 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035346366616231383433396535656234346331633037306236303736 Jan 26 18:18:13.198000 audit: BPF prog-id=154 op=UNLOAD Jan 26 18:18:13.198000 audit[3266]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3248 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035346366616231383433396535656234346331633037306236303736 Jan 26 18:18:13.201000 audit: BPF prog-id=153 op=UNLOAD Jan 26 18:18:13.201000 audit[3266]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3248 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035346366616231383433396535656234346331633037306236303736 Jan 26 18:18:13.201000 audit: BPF prog-id=155 op=LOAD Jan 26 18:18:13.201000 audit[3266]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3248 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035346366616231383433396535656234346331633037306236303736 Jan 26 18:18:13.344232 kubelet[2826]: E0126 18:18:13.344105 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4jch" podUID="6589570b-6489-4043-b23c-e5a49733eb4e" Jan 26 18:18:13.344217 systemd[1]: Started cri-containerd-a908b8c501289bbc3e14f7a897292cc6d927a62a1cb47d3ff3c9105cd0143241.scope - libcontainer container a908b8c501289bbc3e14f7a897292cc6d927a62a1cb47d3ff3c9105cd0143241. Jan 26 18:18:13.375309 kubelet[2826]: E0126 18:18:13.375138 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.375309 kubelet[2826]: W0126 18:18:13.375264 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.375490 kubelet[2826]: E0126 18:18:13.375291 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.378061 kubelet[2826]: E0126 18:18:13.377732 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.378061 kubelet[2826]: W0126 18:18:13.377902 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.378061 kubelet[2826]: E0126 18:18:13.377917 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.378702 kubelet[2826]: E0126 18:18:13.378585 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.378746 kubelet[2826]: W0126 18:18:13.378720 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.378746 kubelet[2826]: E0126 18:18:13.378733 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.379599 kubelet[2826]: E0126 18:18:13.379469 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.379599 kubelet[2826]: W0126 18:18:13.379515 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.379599 kubelet[2826]: E0126 18:18:13.379526 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:13.379917 kubelet[2826]: E0126 18:18:13.379728 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.379917 kubelet[2826]: W0126 18:18:13.379873 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.379917 kubelet[2826]: E0126 18:18:13.379884 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.381273 kubelet[2826]: E0126 18:18:13.381086 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.381273 kubelet[2826]: W0126 18:18:13.381132 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.381273 kubelet[2826]: E0126 18:18:13.381143 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.381703 kubelet[2826]: E0126 18:18:13.381337 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.381703 kubelet[2826]: W0126 18:18:13.381346 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.381703 kubelet[2826]: E0126 18:18:13.381354 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.381703 kubelet[2826]: E0126 18:18:13.381568 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.381703 kubelet[2826]: W0126 18:18:13.381576 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.381703 kubelet[2826]: E0126 18:18:13.381584 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.381943 kubelet[2826]: E0126 18:18:13.381897 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.381943 kubelet[2826]: W0126 18:18:13.381907 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.381943 kubelet[2826]: E0126 18:18:13.381915 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:13.382326 kubelet[2826]: E0126 18:18:13.382192 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.382326 kubelet[2826]: W0126 18:18:13.382235 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.382326 kubelet[2826]: E0126 18:18:13.382245 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.383067 kubelet[2826]: E0126 18:18:13.382913 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.383067 kubelet[2826]: W0126 18:18:13.382961 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.383067 kubelet[2826]: E0126 18:18:13.382971 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.383676 kubelet[2826]: E0126 18:18:13.383651 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.384040 kubelet[2826]: W0126 18:18:13.383942 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.384396 kubelet[2826]: E0126 18:18:13.384216 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.385554 kubelet[2826]: E0126 18:18:13.385498 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.385554 kubelet[2826]: W0126 18:18:13.385547 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.385554 kubelet[2826]: E0126 18:18:13.385557 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.386023 kubelet[2826]: E0126 18:18:13.385939 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.386023 kubelet[2826]: W0126 18:18:13.385953 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.386023 kubelet[2826]: E0126 18:18:13.385961 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:13.387532 kubelet[2826]: E0126 18:18:13.387398 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.387532 kubelet[2826]: W0126 18:18:13.387456 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.387532 kubelet[2826]: E0126 18:18:13.387471 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.388404 kubelet[2826]: E0126 18:18:13.388217 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.388404 kubelet[2826]: W0126 18:18:13.388276 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.388404 kubelet[2826]: E0126 18:18:13.388290 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.388962 kubelet[2826]: E0126 18:18:13.388723 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.388962 kubelet[2826]: W0126 18:18:13.388814 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.388962 kubelet[2826]: E0126 18:18:13.388947 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.389877 kubelet[2826]: E0126 18:18:13.389706 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.389877 kubelet[2826]: W0126 18:18:13.389799 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.389877 kubelet[2826]: E0126 18:18:13.389811 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.390197 kubelet[2826]: E0126 18:18:13.390151 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.390197 kubelet[2826]: W0126 18:18:13.390164 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.390304 kubelet[2826]: E0126 18:18:13.390237 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:13.390476 kubelet[2826]: E0126 18:18:13.390448 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.390476 kubelet[2826]: W0126 18:18:13.390462 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.390476 kubelet[2826]: E0126 18:18:13.390471 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.391990 kubelet[2826]: E0126 18:18:13.391738 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.391990 kubelet[2826]: W0126 18:18:13.391810 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.391990 kubelet[2826]: E0126 18:18:13.391922 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.392300 kubelet[2826]: I0126 18:18:13.391987 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6589570b-6489-4043-b23c-e5a49733eb4e-kubelet-dir\") pod \"csi-node-driver-x4jch\" (UID: \"6589570b-6489-4043-b23c-e5a49733eb4e\") " pod="calico-system/csi-node-driver-x4jch" Jan 26 18:18:13.392966 kubelet[2826]: E0126 18:18:13.392691 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.392966 kubelet[2826]: W0126 18:18:13.392740 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.392966 kubelet[2826]: E0126 18:18:13.392922 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.393506 kubelet[2826]: E0126 18:18:13.393474 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.393506 kubelet[2826]: W0126 18:18:13.393487 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.393879 kubelet[2826]: E0126 18:18:13.393653 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:13.394750 kubelet[2826]: E0126 18:18:13.394611 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.394750 kubelet[2826]: W0126 18:18:13.394625 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.394750 kubelet[2826]: E0126 18:18:13.394634 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.395136 kubelet[2826]: I0126 18:18:13.394702 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6589570b-6489-4043-b23c-e5a49733eb4e-socket-dir\") pod \"csi-node-driver-x4jch\" (UID: \"6589570b-6489-4043-b23c-e5a49733eb4e\") " pod="calico-system/csi-node-driver-x4jch" Jan 26 18:18:13.395136 kubelet[2826]: E0126 18:18:13.395079 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.395136 kubelet[2826]: W0126 18:18:13.395086 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.395136 kubelet[2826]: E0126 18:18:13.395097 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.395646 kubelet[2826]: E0126 18:18:13.395403 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.395646 kubelet[2826]: W0126 18:18:13.395411 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.395646 kubelet[2826]: E0126 18:18:13.395470 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.400116 kubelet[2826]: E0126 18:18:13.400053 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.400116 kubelet[2826]: W0126 18:18:13.400100 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.400116 kubelet[2826]: E0126 18:18:13.400111 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:13.400233 kubelet[2826]: I0126 18:18:13.400131 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6589570b-6489-4043-b23c-e5a49733eb4e-varrun\") pod \"csi-node-driver-x4jch\" (UID: \"6589570b-6489-4043-b23c-e5a49733eb4e\") " pod="calico-system/csi-node-driver-x4jch" Jan 26 18:18:13.402058 kubelet[2826]: E0126 18:18:13.401948 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.402058 kubelet[2826]: W0126 18:18:13.401997 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.402058 kubelet[2826]: E0126 18:18:13.402013 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.402058 kubelet[2826]: I0126 18:18:13.402030 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6589570b-6489-4043-b23c-e5a49733eb4e-registration-dir\") pod \"csi-node-driver-x4jch\" (UID: \"6589570b-6489-4043-b23c-e5a49733eb4e\") " pod="calico-system/csi-node-driver-x4jch" Jan 26 18:18:13.402685 kubelet[2826]: E0126 18:18:13.402630 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.402685 kubelet[2826]: W0126 18:18:13.402644 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.403802 kubelet[2826]: E0126 18:18:13.403649 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.403802 kubelet[2826]: W0126 18:18:13.403698 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.404037 kubelet[2826]: E0126 18:18:13.403886 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.404037 kubelet[2826]: I0126 18:18:13.403907 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9st6\" (UniqueName: \"kubernetes.io/projected/6589570b-6489-4043-b23c-e5a49733eb4e-kube-api-access-s9st6\") pod \"csi-node-driver-x4jch\" (UID: \"6589570b-6489-4043-b23c-e5a49733eb4e\") " pod="calico-system/csi-node-driver-x4jch" Jan 26 18:18:13.404941 kubelet[2826]: E0126 18:18:13.404168 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:13.404941 kubelet[2826]: E0126 18:18:13.404551 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.404941 kubelet[2826]: W0126 18:18:13.404561 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.404941 kubelet[2826]: E0126 18:18:13.404729 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.409227 kubelet[2826]: E0126 18:18:13.409142 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.409227 kubelet[2826]: W0126 18:18:13.409202 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.409227 kubelet[2826]: E0126 18:18:13.409214 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.425150 kubelet[2826]: E0126 18:18:13.423980 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.425150 kubelet[2826]: W0126 18:18:13.424006 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.425150 kubelet[2826]: E0126 18:18:13.424578 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.431163 kubelet[2826]: E0126 18:18:13.430632 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.431163 kubelet[2826]: W0126 18:18:13.430890 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.431163 kubelet[2826]: E0126 18:18:13.430913 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.433439 kubelet[2826]: E0126 18:18:13.433423 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.433721 kubelet[2826]: W0126 18:18:13.433705 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.434673 kubelet[2826]: E0126 18:18:13.434052 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:13.465000 audit: BPF prog-id=156 op=LOAD Jan 26 18:18:13.466000 audit: BPF prog-id=157 op=LOAD Jan 26 18:18:13.466000 audit[3303]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3292 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139303862386335303132383962626333653134663761383937323932 Jan 26 18:18:13.466000 audit: BPF prog-id=157 op=UNLOAD Jan 26 18:18:13.466000 audit[3303]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139303862386335303132383962626333653134663761383937323932 Jan 26 18:18:13.466000 audit: BPF prog-id=158 op=LOAD Jan 26 18:18:13.466000 audit[3303]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3292 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139303862386335303132383962626333653134663761383937323932 Jan 26 18:18:13.466000 audit: BPF prog-id=159 op=LOAD Jan 26 18:18:13.466000 audit[3303]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3292 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139303862386335303132383962626333653134663761383937323932 Jan 26 18:18:13.466000 audit: BPF prog-id=159 op=UNLOAD Jan 26 18:18:13.466000 audit[3303]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139303862386335303132383962626333653134663761383937323932 Jan 26 18:18:13.466000 audit: BPF prog-id=158 op=UNLOAD Jan 26 
18:18:13.466000 audit[3303]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139303862386335303132383962626333653134663761383937323932 Jan 26 18:18:13.466000 audit: BPF prog-id=160 op=LOAD Jan 26 18:18:13.466000 audit[3303]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3292 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:13.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139303862386335303132383962626333653134663761383937323932 Jan 26 18:18:13.505600 kubelet[2826]: E0126 18:18:13.505572 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.506670 kubelet[2826]: W0126 18:18:13.506229 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.506670 kubelet[2826]: E0126 18:18:13.506505 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.508812 kubelet[2826]: E0126 18:18:13.508717 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.508812 kubelet[2826]: W0126 18:18:13.508738 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.509505 kubelet[2826]: E0126 18:18:13.509437 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.512930 kubelet[2826]: E0126 18:18:13.511083 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.512930 kubelet[2826]: W0126 18:18:13.511098 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.514520 kubelet[2826]: E0126 18:18:13.514290 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:13.523975 kubelet[2826]: E0126 18:18:13.523311 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.523975 kubelet[2826]: W0126 18:18:13.523610 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.528235 containerd[1597]: time="2026-01-26T18:18:13.526056102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8f7f497f-86mm5,Uid:14cb4524-06cf-4dce-8dbc-2ee84c99e9aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"054cfab18439e5eb44c1c070b60760a3127fe576dfae64af2f41693cfdcf0294\"" Jan 26 18:18:13.529060 kubelet[2826]: E0126 18:18:13.528534 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.529624 kubelet[2826]: E0126 18:18:13.529443 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:13.530196 kubelet[2826]: E0126 18:18:13.529601 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.530350 kubelet[2826]: W0126 18:18:13.530305 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.530394 kubelet[2826]: E0126 18:18:13.530355 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.532891 kubelet[2826]: E0126 18:18:13.531495 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.532891 kubelet[2826]: W0126 18:18:13.531509 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.532891 kubelet[2826]: E0126 18:18:13.531908 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.532891 kubelet[2826]: E0126 18:18:13.532550 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.533050 kubelet[2826]: W0126 18:18:13.532813 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.533070 kubelet[2826]: E0126 18:18:13.533052 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:13.535477 kubelet[2826]: E0126 18:18:13.535350 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.535477 kubelet[2826]: W0126 18:18:13.535393 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.535947 kubelet[2826]: E0126 18:18:13.535727 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.536402 kubelet[2826]: E0126 18:18:13.536305 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.536500 kubelet[2826]: W0126 18:18:13.536441 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.537919 kubelet[2826]: E0126 18:18:13.536920 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.538075 kubelet[2826]: E0126 18:18:13.537999 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.538075 kubelet[2826]: W0126 18:18:13.538009 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.538316 kubelet[2826]: E0126 18:18:13.538167 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.539730 kubelet[2826]: E0126 18:18:13.539216 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.539730 kubelet[2826]: W0126 18:18:13.539229 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.539730 kubelet[2826]: E0126 18:18:13.539516 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.542543 kubelet[2826]: E0126 18:18:13.542390 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.542543 kubelet[2826]: W0126 18:18:13.542437 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.542543 kubelet[2826]: E0126 18:18:13.542491 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:13.543247 kubelet[2826]: E0126 18:18:13.543120 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.543247 kubelet[2826]: W0126 18:18:13.543166 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.543887 kubelet[2826]: E0126 18:18:13.543869 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.545073 kubelet[2826]: E0126 18:18:13.544912 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.545574 kubelet[2826]: W0126 18:18:13.545560 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.545861 kubelet[2826]: E0126 18:18:13.545700 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.546402 kubelet[2826]: E0126 18:18:13.546270 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.546402 kubelet[2826]: W0126 18:18:13.546313 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.546589 kubelet[2826]: E0126 18:18:13.546556 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.552099 kubelet[2826]: E0126 18:18:13.551979 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.552099 kubelet[2826]: W0126 18:18:13.552099 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.552611 kubelet[2826]: E0126 18:18:13.552332 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.553244 kubelet[2826]: E0126 18:18:13.553231 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.553244 kubelet[2826]: W0126 18:18:13.553242 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.553393 kubelet[2826]: E0126 18:18:13.553291 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:13.553648 containerd[1597]: time="2026-01-26T18:18:13.553582773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 26 18:18:13.556004 kubelet[2826]: E0126 18:18:13.555751 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.556466 kubelet[2826]: W0126 18:18:13.556262 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.556679 kubelet[2826]: E0126 18:18:13.556578 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.559387 kubelet[2826]: E0126 18:18:13.559216 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.559387 kubelet[2826]: W0126 18:18:13.559266 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.559468 kubelet[2826]: E0126 18:18:13.559413 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.563173 kubelet[2826]: E0126 18:18:13.563133 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.563173 kubelet[2826]: W0126 18:18:13.563176 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.563469 kubelet[2826]: E0126 18:18:13.563308 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.564264 kubelet[2826]: E0126 18:18:13.564210 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.564264 kubelet[2826]: W0126 18:18:13.564248 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.564414 kubelet[2826]: E0126 18:18:13.564384 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.566246 kubelet[2826]: E0126 18:18:13.566180 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.566918 kubelet[2826]: W0126 18:18:13.566713 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.569044 kubelet[2826]: E0126 18:18:13.568945 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:13.570110 kubelet[2826]: E0126 18:18:13.569434 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.570110 kubelet[2826]: W0126 18:18:13.570003 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.571233 kubelet[2826]: E0126 18:18:13.571088 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.571233 kubelet[2826]: E0126 18:18:13.571218 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.572058 kubelet[2826]: W0126 18:18:13.571695 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.572225 containerd[1597]: time="2026-01-26T18:18:13.571505648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k6nmt,Uid:6199bf05-5e4c-4a1f-a96f-15ddd98af440,Namespace:calico-system,Attempt:0,} returns sandbox id \"a908b8c501289bbc3e14f7a897292cc6d927a62a1cb47d3ff3c9105cd0143241\"" Jan 26 18:18:13.572490 kubelet[2826]: E0126 18:18:13.571719 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.573948 kubelet[2826]: E0126 18:18:13.573692 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.574227 kubelet[2826]: W0126 18:18:13.574205 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.574570 kubelet[2826]: E0126 18:18:13.574556 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:13.574951 kubelet[2826]: E0126 18:18:13.573751 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:13.588409 kubelet[2826]: E0126 18:18:13.588341 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:13.588409 kubelet[2826]: W0126 18:18:13.588423 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:13.589223 kubelet[2826]: E0126 18:18:13.588530 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:14.386565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount608071275.mount: Deactivated successfully. 
Jan 26 18:18:15.259490 kubelet[2826]: E0126 18:18:15.258400 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4jch" podUID="6589570b-6489-4043-b23c-e5a49733eb4e" Jan 26 18:18:17.249305 kubelet[2826]: E0126 18:18:17.244241 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4jch" podUID="6589570b-6489-4043-b23c-e5a49733eb4e" Jan 26 18:18:19.110644 containerd[1597]: time="2026-01-26T18:18:19.110571492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:19.111682 containerd[1597]: time="2026-01-26T18:18:19.111592657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33738263" Jan 26 18:18:19.113318 containerd[1597]: time="2026-01-26T18:18:19.113193414Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:19.116241 containerd[1597]: time="2026-01-26T18:18:19.116122639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:19.117275 containerd[1597]: time="2026-01-26T18:18:19.117158441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 5.563504595s" Jan 26 18:18:19.117275 containerd[1597]: time="2026-01-26T18:18:19.117218223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 26 18:18:19.122338 containerd[1597]: time="2026-01-26T18:18:19.121967564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 26 18:18:19.140088 containerd[1597]: time="2026-01-26T18:18:19.139927655Z" level=info msg="CreateContainer within sandbox \"054cfab18439e5eb44c1c070b60760a3127fe576dfae64af2f41693cfdcf0294\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 26 18:18:19.157117 containerd[1597]: time="2026-01-26T18:18:19.157014818Z" level=info msg="Container a652bf8a2d0c2fe138944e43ccd0672f627b038767ae5341b823c2853ff93879: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:18:19.167146 containerd[1597]: time="2026-01-26T18:18:19.167045056Z" level=info msg="CreateContainer within sandbox \"054cfab18439e5eb44c1c070b60760a3127fe576dfae64af2f41693cfdcf0294\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a652bf8a2d0c2fe138944e43ccd0672f627b038767ae5341b823c2853ff93879\"" Jan 26 18:18:19.168268 containerd[1597]: time="2026-01-26T18:18:19.168248863Z" level=info msg="StartContainer for 
\"a652bf8a2d0c2fe138944e43ccd0672f627b038767ae5341b823c2853ff93879\"" Jan 26 18:18:19.170024 containerd[1597]: time="2026-01-26T18:18:19.169930209Z" level=info msg="connecting to shim a652bf8a2d0c2fe138944e43ccd0672f627b038767ae5341b823c2853ff93879" address="unix:///run/containerd/s/223e2a27fc591da0e513db7104091c064b13ca9c0300cfbe4d38a1fca8eda0a1" protocol=ttrpc version=3 Jan 26 18:18:19.205356 systemd[1]: Started cri-containerd-a652bf8a2d0c2fe138944e43ccd0672f627b038767ae5341b823c2853ff93879.scope - libcontainer container a652bf8a2d0c2fe138944e43ccd0672f627b038767ae5341b823c2853ff93879. Jan 26 18:18:19.227000 audit: BPF prog-id=161 op=LOAD Jan 26 18:18:19.232067 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 26 18:18:19.232187 kernel: audit: type=1334 audit(1769451499.227:546): prog-id=161 op=LOAD Jan 26 18:18:19.234701 kubelet[2826]: E0126 18:18:19.234595 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4jch" podUID="6589570b-6489-4043-b23c-e5a49733eb4e" Jan 26 18:18:19.229000 audit: BPF prog-id=162 op=LOAD Jan 26 18:18:19.238961 kernel: audit: type=1334 audit(1769451499.229:547): prog-id=162 op=LOAD Jan 26 18:18:19.229000 audit[3419]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3248 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:19.255148 kernel: audit: type=1300 audit(1769451499.229:547): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3248 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:19.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353262663861326430633266653133383934346534336363643036 Jan 26 18:18:19.270543 kernel: audit: type=1327 audit(1769451499.229:547): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353262663861326430633266653133383934346534336363643036 Jan 26 18:18:19.274147 kernel: audit: type=1334 audit(1769451499.229:548): prog-id=162 op=UNLOAD Jan 26 18:18:19.229000 audit: BPF prog-id=162 op=UNLOAD Jan 26 18:18:19.286899 kernel: audit: type=1300 audit(1769451499.229:548): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3248 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:19.229000 audit[3419]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3248 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:19.229000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353262663861326430633266653133383934346534336363643036 Jan 26 18:18:19.299664 kernel: audit: type=1327 audit(1769451499.229:548): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353262663861326430633266653133383934346534336363643036 Jan 26 18:18:19.299769 kernel: audit: type=1334 audit(1769451499.229:549): prog-id=163 op=LOAD Jan 26 18:18:19.229000 audit: BPF prog-id=163 op=LOAD Jan 26 18:18:19.302935 kernel: audit: type=1300 audit(1769451499.229:549): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3248 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:19.229000 audit[3419]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3248 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:19.316055 kernel: audit: type=1327 audit(1769451499.229:549): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353262663861326430633266653133383934346534336363643036 Jan 26 18:18:19.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353262663861326430633266653133383934346534336363643036 Jan 26 18:18:19.229000 audit: BPF prog-id=164 op=LOAD Jan 26 18:18:19.229000 audit[3419]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3248 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:19.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353262663861326430633266653133383934346534336363643036 Jan 26 18:18:19.230000 audit: BPF prog-id=164 op=UNLOAD Jan 26 18:18:19.230000 audit[3419]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3248 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:19.230000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353262663861326430633266653133383934346534336363643036 Jan 26 18:18:19.230000 audit: BPF prog-id=163 op=UNLOAD Jan 26 18:18:19.230000 audit[3419]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3248 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:19.230000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353262663861326430633266653133383934346534336363643036 Jan 26 18:18:19.230000 audit: BPF prog-id=165 op=LOAD Jan 26 18:18:19.230000 audit[3419]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3248 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:19.230000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353262663861326430633266653133383934346534336363643036 Jan 26 18:18:19.335583 containerd[1597]: time="2026-01-26T18:18:19.335491989Z" level=info msg="StartContainer for \"a652bf8a2d0c2fe138944e43ccd0672f627b038767ae5341b823c2853ff93879\" returns successfully" Jan 26 18:18:19.862114 containerd[1597]: time="2026-01-26T18:18:19.862052730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:19.863198 containerd[1597]: time="2026-01-26T18:18:19.863092259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:19.864552 containerd[1597]: time="2026-01-26T18:18:19.864454030Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:19.866918 containerd[1597]: time="2026-01-26T18:18:19.866896759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:19.867452 containerd[1597]: time="2026-01-26T18:18:19.867394241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 745.398323ms" Jan 26 18:18:19.867504 containerd[1597]: time="2026-01-26T18:18:19.867452560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 26 18:18:19.870635 containerd[1597]: time="2026-01-26T18:18:19.870567799Z" level=info msg="CreateContainer within sandbox \"a908b8c501289bbc3e14f7a897292cc6d927a62a1cb47d3ff3c9105cd0143241\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 26 18:18:19.881544 containerd[1597]: time="2026-01-26T18:18:19.881451936Z" level=info msg="Container 
b687209c9b74cc7eb23158a739caeec52cc9480120b40598d5e933d01ca92eb7: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:18:19.889723 kubelet[2826]: E0126 18:18:19.889683 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:19.894470 containerd[1597]: time="2026-01-26T18:18:19.894340985Z" level=info msg="CreateContainer within sandbox \"a908b8c501289bbc3e14f7a897292cc6d927a62a1cb47d3ff3c9105cd0143241\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b687209c9b74cc7eb23158a739caeec52cc9480120b40598d5e933d01ca92eb7\"" Jan 26 18:18:19.896513 containerd[1597]: time="2026-01-26T18:18:19.896276856Z" level=info msg="StartContainer for \"b687209c9b74cc7eb23158a739caeec52cc9480120b40598d5e933d01ca92eb7\"" Jan 26 18:18:19.898142 containerd[1597]: time="2026-01-26T18:18:19.897929470Z" level=info msg="connecting to shim b687209c9b74cc7eb23158a739caeec52cc9480120b40598d5e933d01ca92eb7" address="unix:///run/containerd/s/633d715e13537ad55142537de9e680e446935053766c87ac055f23dbcef5deb1" protocol=ttrpc version=3 Jan 26 18:18:19.917586 kubelet[2826]: I0126 18:18:19.917381 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8f7f497f-86mm5" podStartSLOduration=2.339699532 podStartE2EDuration="7.917361865s" podCreationTimestamp="2026-01-26 18:18:12 +0000 UTC" firstStartedPulling="2026-01-26 18:18:13.543947208 +0000 UTC m=+20.474807910" lastFinishedPulling="2026-01-26 18:18:19.121609541 +0000 UTC m=+26.052470243" observedRunningTime="2026-01-26 18:18:19.916621743 +0000 UTC m=+26.847482446" watchObservedRunningTime="2026-01-26 18:18:19.917361865 +0000 UTC m=+26.848222567" Jan 26 18:18:19.971200 systemd[1]: Started cri-containerd-b687209c9b74cc7eb23158a739caeec52cc9480120b40598d5e933d01ca92eb7.scope - libcontainer container b687209c9b74cc7eb23158a739caeec52cc9480120b40598d5e933d01ca92eb7. Jan 26 18:18:19.974559 kubelet[2826]: E0126 18:18:19.974474 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.975094 kubelet[2826]: W0126 18:18:19.975021 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.976874 kubelet[2826]: E0126 18:18:19.975191 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:19.978526 kubelet[2826]: E0126 18:18:19.978317 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.978769 kubelet[2826]: W0126 18:18:19.978610 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.978769 kubelet[2826]: E0126 18:18:19.978764 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:19.979634 kubelet[2826]: E0126 18:18:19.979566 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.979889 kubelet[2826]: W0126 18:18:19.979717 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.980874 kubelet[2826]: E0126 18:18:19.979969 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:19.982398 kubelet[2826]: E0126 18:18:19.982330 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.983522 kubelet[2826]: W0126 18:18:19.983481 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.983760 kubelet[2826]: E0126 18:18:19.983651 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:19.984712 kubelet[2826]: E0126 18:18:19.984609 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.984951 kubelet[2826]: W0126 18:18:19.984647 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.985238 kubelet[2826]: E0126 18:18:19.984943 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:19.986200 kubelet[2826]: E0126 18:18:19.986061 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.988183 kubelet[2826]: W0126 18:18:19.987907 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.988183 kubelet[2826]: E0126 18:18:19.988054 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:19.988894 kubelet[2826]: E0126 18:18:19.988566 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.988894 kubelet[2826]: W0126 18:18:19.988579 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.988894 kubelet[2826]: E0126 18:18:19.988589 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:19.990140 kubelet[2826]: E0126 18:18:19.990102 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.990140 kubelet[2826]: W0126 18:18:19.990113 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.990140 kubelet[2826]: E0126 18:18:19.990124 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:19.990680 kubelet[2826]: E0126 18:18:19.990622 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.990680 kubelet[2826]: W0126 18:18:19.990635 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.990680 kubelet[2826]: E0126 18:18:19.990644 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:19.991893 kubelet[2826]: E0126 18:18:19.991329 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.991893 kubelet[2826]: W0126 18:18:19.991341 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.991893 kubelet[2826]: E0126 18:18:19.991350 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:19.992440 kubelet[2826]: E0126 18:18:19.992376 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.992440 kubelet[2826]: W0126 18:18:19.992388 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.992440 kubelet[2826]: E0126 18:18:19.992398 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:19.993906 kubelet[2826]: E0126 18:18:19.993892 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.993972 kubelet[2826]: W0126 18:18:19.993961 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.994030 kubelet[2826]: E0126 18:18:19.994019 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:19.996317 kubelet[2826]: E0126 18:18:19.994730 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.996317 kubelet[2826]: W0126 18:18:19.994742 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.996317 kubelet[2826]: E0126 18:18:19.994751 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:19.997104 kubelet[2826]: E0126 18:18:19.997058 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.997375 kubelet[2826]: W0126 18:18:19.997236 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.997375 kubelet[2826]: E0126 18:18:19.997248 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:19.998215 kubelet[2826]: E0126 18:18:19.998203 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:19.998505 kubelet[2826]: W0126 18:18:19.998354 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:19.998505 kubelet[2826]: E0126 18:18:19.998366 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.000008 kubelet[2826]: E0126 18:18:19.999990 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.000008 kubelet[2826]: W0126 18:18:20.000147 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.000008 kubelet[2826]: E0126 18:18:20.000164 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.001226 kubelet[2826]: E0126 18:18:20.001211 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.001314 kubelet[2826]: W0126 18:18:20.001299 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.001598 kubelet[2826]: E0126 18:18:20.001582 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:20.002395 kubelet[2826]: E0126 18:18:20.002285 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.002395 kubelet[2826]: W0126 18:18:20.002297 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.002395 kubelet[2826]: E0126 18:18:20.002343 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.003085 kubelet[2826]: E0126 18:18:20.002986 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.003085 kubelet[2826]: W0126 18:18:20.002998 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.003085 kubelet[2826]: E0126 18:18:20.003042 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.003621 kubelet[2826]: E0126 18:18:20.003607 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.003674 kubelet[2826]: W0126 18:18:20.003664 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.004892 kubelet[2826]: E0126 18:18:20.004649 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.005301 kubelet[2826]: E0126 18:18:20.005155 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.005301 kubelet[2826]: W0126 18:18:20.005166 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.005301 kubelet[2826]: E0126 18:18:20.005242 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.005572 kubelet[2826]: E0126 18:18:20.005560 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.005625 kubelet[2826]: W0126 18:18:20.005616 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.006101 kubelet[2826]: E0126 18:18:20.006088 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:20.006490 kubelet[2826]: E0126 18:18:20.006462 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.006490 kubelet[2826]: W0126 18:18:20.006476 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.006899 kubelet[2826]: E0126 18:18:20.006695 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.007658 kubelet[2826]: E0126 18:18:20.007645 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.007950 kubelet[2826]: W0126 18:18:20.007711 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.007950 kubelet[2826]: E0126 18:18:20.007745 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.008325 kubelet[2826]: E0126 18:18:20.008312 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.008478 kubelet[2826]: W0126 18:18:20.008465 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.008638 kubelet[2826]: E0126 18:18:20.008593 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.009464 kubelet[2826]: E0126 18:18:20.009320 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.009464 kubelet[2826]: W0126 18:18:20.009378 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.009606 kubelet[2826]: E0126 18:18:20.009589 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.010613 kubelet[2826]: E0126 18:18:20.010553 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.010613 kubelet[2826]: W0126 18:18:20.010601 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.010762 kubelet[2826]: E0126 18:18:20.010746 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:20.011661 kubelet[2826]: E0126 18:18:20.011625 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.011707 kubelet[2826]: W0126 18:18:20.011661 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.011739 kubelet[2826]: E0126 18:18:20.011732 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.013387 kubelet[2826]: E0126 18:18:20.013337 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.013387 kubelet[2826]: W0126 18:18:20.013375 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.013744 kubelet[2826]: E0126 18:18:20.013601 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.014620 kubelet[2826]: E0126 18:18:20.014602 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.014620 kubelet[2826]: W0126 18:18:20.014616 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.014691 kubelet[2826]: E0126 18:18:20.014628 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.016420 kubelet[2826]: E0126 18:18:20.016355 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.016553 kubelet[2826]: W0126 18:18:20.016499 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.014000 audit[3512]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3512 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:20.014000 audit[3512]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc07853ea0 a2=0 a3=7ffc07853e8c items=0 ppid=2938 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:20.017042 kubelet[2826]: E0126 18:18:20.016650 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 26 18:18:20.014000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:20.018441 kubelet[2826]: E0126 18:18:20.018377 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.018558 kubelet[2826]: W0126 18:18:20.018522 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.018687 kubelet[2826]: E0126 18:18:20.018651 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.019497 kubelet[2826]: E0126 18:18:20.019464 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 26 18:18:20.019579 kubelet[2826]: W0126 18:18:20.019497 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 26 18:18:20.019579 kubelet[2826]: E0126 18:18:20.019507 2826 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 26 18:18:20.021000 audit[3512]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3512 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:20.021000 audit[3512]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc07853ea0 a2=0 a3=7ffc07853e8c items=0 ppid=2938 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:20.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:20.086000 audit: BPF prog-id=166 op=LOAD Jan 26 18:18:20.086000 audit[3461]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3292 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:20.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236383732303963396237346363376562323331353861373339636165 Jan 26 18:18:20.086000 audit: BPF prog-id=167 op=LOAD Jan 26 18:18:20.086000 audit[3461]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3292 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:20.086000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236383732303963396237346363376562323331353861373339636165 Jan 26 18:18:20.086000 audit: BPF prog-id=167 op=UNLOAD Jan 26 18:18:20.086000 audit[3461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:20.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236383732303963396237346363376562323331353861373339636165 Jan 26 18:18:20.086000 audit: BPF prog-id=166 op=UNLOAD Jan 26 18:18:20.086000 audit[3461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:20.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236383732303963396237346363376562323331353861373339636165 Jan 26 18:18:20.086000 audit: BPF prog-id=168 op=LOAD Jan 26 18:18:20.086000 audit[3461]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3292 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:20.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236383732303963396237346363376562323331353861373339636165 Jan 26 18:18:20.130545 containerd[1597]: time="2026-01-26T18:18:20.130386148Z" level=info msg="StartContainer for \"b687209c9b74cc7eb23158a739caeec52cc9480120b40598d5e933d01ca92eb7\" returns successfully" Jan 26 18:18:20.148324 systemd[1]: cri-containerd-b687209c9b74cc7eb23158a739caeec52cc9480120b40598d5e933d01ca92eb7.scope: Deactivated successfully. Jan 26 18:18:20.154317 containerd[1597]: time="2026-01-26T18:18:20.154247727Z" level=info msg="received container exit event container_id:\"b687209c9b74cc7eb23158a739caeec52cc9480120b40598d5e933d01ca92eb7\" id:\"b687209c9b74cc7eb23158a739caeec52cc9480120b40598d5e933d01ca92eb7\" pid:3493 exited_at:{seconds:1769451500 nanos:153498338}" Jan 26 18:18:20.155000 audit: BPF prog-id=168 op=UNLOAD Jan 26 18:18:20.210117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b687209c9b74cc7eb23158a739caeec52cc9480120b40598d5e933d01ca92eb7-rootfs.mount: Deactivated successfully. 
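The repeated kubelet errors above ("Failed to unmarshal output for command: init, output: \"\"") come from the FlexVolume dynamic-probe path: kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and expects a JSON status object on stdout; because the executable is not found, stdout is empty and the JSON decode fails. As a rough, hypothetical illustration only (this is not Calico's real uds driver, and the field names follow the upstream FlexVolume convention as an assumption), a minimal stand-in driver that would satisfy the init handshake could look like this Python sketch:

    #!/usr/bin/env python3
    # Hypothetical stand-in for a FlexVolume driver binary; NOT the real
    # nodeagent~uds/uds driver referenced in the log entries above.
    import json
    import sys

    def main() -> int:
        cmd = sys.argv[1] if len(sys.argv) > 1 else ""
        if cmd == "init":
            # kubelet parses stdout as JSON; an empty reply is exactly what
            # produces "unexpected end of JSON input" in the entries above.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # Anything this sketch does not implement is reported as unsupported.
        print(json.dumps({"status": "Not supported",
                          "message": f"unhandled command: {cmd}"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())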
Jan 26 18:18:20.895713 kubelet[2826]: E0126 18:18:20.895662 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:20.897384 kubelet[2826]: E0126 18:18:20.896096 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:20.897438 containerd[1597]: time="2026-01-26T18:18:20.897326074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 26 18:18:21.230399 kubelet[2826]: E0126 18:18:21.230081 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4jch" podUID="6589570b-6489-4043-b23c-e5a49733eb4e" Jan 26 18:18:21.898319 kubelet[2826]: E0126 18:18:21.898145 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:23.133573 containerd[1597]: time="2026-01-26T18:18:23.133465463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:23.135030 containerd[1597]: time="2026-01-26T18:18:23.134773340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 26 18:18:23.136401 containerd[1597]: time="2026-01-26T18:18:23.136368308Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:23.139429 containerd[1597]: time="2026-01-26T18:18:23.139347760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:23.140377 containerd[1597]: time="2026-01-26T18:18:23.140293781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.242927023s" Jan 26 18:18:23.140377 containerd[1597]: time="2026-01-26T18:18:23.140372238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 26 18:18:23.144715 containerd[1597]: time="2026-01-26T18:18:23.144528093Z" level=info msg="CreateContainer within sandbox \"a908b8c501289bbc3e14f7a897292cc6d927a62a1cb47d3ff3c9105cd0143241\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 26 18:18:23.158599 containerd[1597]: time="2026-01-26T18:18:23.158504713Z" level=info msg="Container a974349c49f02a2499fbe0d54f03c16a49e749d5561df2d0e3fd0042f3061ca1: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:18:23.169687 containerd[1597]: time="2026-01-26T18:18:23.169589491Z" level=info msg="CreateContainer within sandbox \"a908b8c501289bbc3e14f7a897292cc6d927a62a1cb47d3ff3c9105cd0143241\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a974349c49f02a2499fbe0d54f03c16a49e749d5561df2d0e3fd0042f3061ca1\"" Jan 26 18:18:23.170484 containerd[1597]: time="2026-01-26T18:18:23.170380436Z" level=info msg="StartContainer for \"a974349c49f02a2499fbe0d54f03c16a49e749d5561df2d0e3fd0042f3061ca1\"" Jan 26 18:18:23.172451 containerd[1597]: time="2026-01-26T18:18:23.172415293Z" level=info msg="connecting to shim a974349c49f02a2499fbe0d54f03c16a49e749d5561df2d0e3fd0042f3061ca1" address="unix:///run/containerd/s/633d715e13537ad55142537de9e680e446935053766c87ac055f23dbcef5deb1" protocol=ttrpc version=3 Jan 26 18:18:23.226042 systemd[1]: Started cri-containerd-a974349c49f02a2499fbe0d54f03c16a49e749d5561df2d0e3fd0042f3061ca1.scope - libcontainer container a974349c49f02a2499fbe0d54f03c16a49e749d5561df2d0e3fd0042f3061ca1. Jan 26 18:18:23.231094 kubelet[2826]: E0126 18:18:23.231059 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4jch" podUID="6589570b-6489-4043-b23c-e5a49733eb4e" Jan 26 18:18:23.326000 audit: BPF prog-id=169 op=LOAD Jan 26 18:18:23.326000 audit[3562]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3292 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:23.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139373433343963343966303261323439396662653064353466303363 Jan 26 18:18:23.326000 audit: BPF prog-id=170 op=LOAD Jan 26 18:18:23.326000 audit[3562]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3292 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:23.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139373433343963343966303261323439396662653064353466303363 Jan 26 18:18:23.326000 audit: BPF prog-id=170 op=UNLOAD Jan 26 18:18:23.326000 audit[3562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:23.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139373433343963343966303261323439396662653064353466303363 Jan 26 18:18:23.326000 audit: BPF prog-id=169 op=UNLOAD Jan 26 18:18:23.326000 audit[3562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:23.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139373433343963343966303261323439396662653064353466303363 Jan 26 18:18:23.326000 audit: BPF prog-id=171 op=LOAD Jan 26 18:18:23.326000 audit[3562]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3292 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:23.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139373433343963343966303261323439396662653064353466303363 Jan 26 18:18:23.380493 containerd[1597]: time="2026-01-26T18:18:23.380450256Z" level=info msg="StartContainer for \"a974349c49f02a2499fbe0d54f03c16a49e749d5561df2d0e3fd0042f3061ca1\" returns successfully" Jan 26 18:18:23.910885 kubelet[2826]: E0126 18:18:23.910708 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:24.535315 systemd[1]: cri-containerd-a974349c49f02a2499fbe0d54f03c16a49e749d5561df2d0e3fd0042f3061ca1.scope: Deactivated successfully. Jan 26 18:18:24.537281 containerd[1597]: time="2026-01-26T18:18:24.537090949Z" level=info msg="received container exit event container_id:\"a974349c49f02a2499fbe0d54f03c16a49e749d5561df2d0e3fd0042f3061ca1\" id:\"a974349c49f02a2499fbe0d54f03c16a49e749d5561df2d0e3fd0042f3061ca1\" pid:3574 exited_at:{seconds:1769451504 nanos:536222631}" Jan 26 18:18:24.538124 systemd[1]: cri-containerd-a974349c49f02a2499fbe0d54f03c16a49e749d5561df2d0e3fd0042f3061ca1.scope: Consumed 1.291s CPU time, 170.9M memory peak, 2.5M read from disk, 171.3M written to disk. Jan 26 18:18:24.541000 audit: BPF prog-id=171 op=UNLOAD Jan 26 18:18:24.547254 kernel: kauditd_printk_skb: 49 callbacks suppressed Jan 26 18:18:24.547331 kernel: audit: type=1334 audit(1769451504.541:567): prog-id=171 op=UNLOAD Jan 26 18:18:24.578098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a974349c49f02a2499fbe0d54f03c16a49e749d5561df2d0e3fd0042f3061ca1-rootfs.mount: Deactivated successfully. Jan 26 18:18:24.609575 kubelet[2826]: I0126 18:18:24.609213 2826 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 26 18:18:24.674545 systemd[1]: Created slice kubepods-burstable-pod85693ca0_4fb9_455c_bc62_47950c5b8df8.slice - libcontainer container kubepods-burstable-pod85693ca0_4fb9_455c_bc62_47950c5b8df8.slice. Jan 26 18:18:24.688019 systemd[1]: Created slice kubepods-burstable-pod0f7dc91b_7e8b_44a4_acf9_cafa269e417b.slice - libcontainer container kubepods-burstable-pod0f7dc91b_7e8b_44a4_acf9_cafa269e417b.slice. Jan 26 18:18:24.697578 systemd[1]: Created slice kubepods-besteffort-podabefa45c_cb84_41f4_b47b_49099e1244be.slice - libcontainer container kubepods-besteffort-podabefa45c_cb84_41f4_b47b_49099e1244be.slice. 
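The audit PROCTITLE records interleaved with the container starts above store the audited command line as hex, with argv elements separated by NUL bytes. A short, self-contained helper (illustrative; the sample below is only a prefix of one of the PROCTITLE values in this log, so it decodes to a correspondingly truncated command) recovers the readable runc invocation:

    # Decode an audit PROCTITLE hex string into a readable command line.
    # The kernel stores argv joined by NUL bytes, so replacing NUL with a
    # space reconstructs something close to the original invocation.
    def decode_proctitle(hexstr: str) -> str:
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode("utf-8", "replace")

    if __name__ == "__main__":
        # Prefix of a PROCTITLE value from the audit entries above.
        sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
                  "2F6B38732E696F")
        # Prints: runc --root /run/containerd/runc/k8s.io
        print(decode_proctitle(sample))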
Jan 26 18:18:24.707558 systemd[1]: Created slice kubepods-besteffort-pod6c6db6ff_501e_4b31_91b6_e1eb34423484.slice - libcontainer container kubepods-besteffort-pod6c6db6ff_501e_4b31_91b6_e1eb34423484.slice. Jan 26 18:18:24.716277 systemd[1]: Created slice kubepods-besteffort-pode2e25c6b_345b_4146_b644_2efa5b4232f3.slice - libcontainer container kubepods-besteffort-pode2e25c6b_345b_4146_b644_2efa5b4232f3.slice. Jan 26 18:18:24.726263 systemd[1]: Created slice kubepods-besteffort-podb97323b3_2319_4d70_9a63_c4ff89b68e41.slice - libcontainer container kubepods-besteffort-podb97323b3_2319_4d70_9a63_c4ff89b68e41.slice. Jan 26 18:18:24.735641 systemd[1]: Created slice kubepods-besteffort-pod2acaefc0_3f5b_4b21_902d_9d452b4924d3.slice - libcontainer container kubepods-besteffort-pod2acaefc0_3f5b_4b21_902d_9d452b4924d3.slice. Jan 26 18:18:24.852976 kubelet[2826]: I0126 18:18:24.852710 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqrqw\" (UniqueName: \"kubernetes.io/projected/85693ca0-4fb9-455c-bc62-47950c5b8df8-kube-api-access-pqrqw\") pod \"coredns-668d6bf9bc-6586m\" (UID: \"85693ca0-4fb9-455c-bc62-47950c5b8df8\") " pod="kube-system/coredns-668d6bf9bc-6586m" Jan 26 18:18:24.853135 kubelet[2826]: I0126 18:18:24.853105 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2acaefc0-3f5b-4b21-902d-9d452b4924d3-calico-apiserver-certs\") pod \"calico-apiserver-6dc747cc9d-4d52w\" (UID: \"2acaefc0-3f5b-4b21-902d-9d452b4924d3\") " pod="calico-apiserver/calico-apiserver-6dc747cc9d-4d52w" Jan 26 18:18:24.853263 kubelet[2826]: I0126 18:18:24.853146 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f7dc91b-7e8b-44a4-acf9-cafa269e417b-config-volume\") pod \"coredns-668d6bf9bc-xxkhp\" (UID: \"0f7dc91b-7e8b-44a4-acf9-cafa269e417b\") " pod="kube-system/coredns-668d6bf9bc-xxkhp" Jan 26 18:18:24.853263 kubelet[2826]: I0126 18:18:24.853171 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8qnp\" (UniqueName: \"kubernetes.io/projected/0f7dc91b-7e8b-44a4-acf9-cafa269e417b-kube-api-access-n8qnp\") pod \"coredns-668d6bf9bc-xxkhp\" (UID: \"0f7dc91b-7e8b-44a4-acf9-cafa269e417b\") " pod="kube-system/coredns-668d6bf9bc-xxkhp" Jan 26 18:18:24.853423 kubelet[2826]: I0126 18:18:24.853268 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df5rz\" (UniqueName: \"kubernetes.io/projected/6c6db6ff-501e-4b31-91b6-e1eb34423484-kube-api-access-df5rz\") pod \"calico-apiserver-6dc747cc9d-68rtx\" (UID: \"6c6db6ff-501e-4b31-91b6-e1eb34423484\") " pod="calico-apiserver/calico-apiserver-6dc747cc9d-68rtx" Jan 26 18:18:24.853423 kubelet[2826]: I0126 18:18:24.853297 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b97323b3-2319-4d70-9a63-c4ff89b68e41-whisker-backend-key-pair\") pod \"whisker-756c98c54f-cps8m\" (UID: \"b97323b3-2319-4d70-9a63-c4ff89b68e41\") " pod="calico-system/whisker-756c98c54f-cps8m" Jan 26 18:18:24.853423 kubelet[2826]: I0126 18:18:24.853326 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e2e25c6b-345b-4146-b644-2efa5b4232f3-config\") pod \"goldmane-666569f655-hc8sx\" (UID: \"e2e25c6b-345b-4146-b644-2efa5b4232f3\") " pod="calico-system/goldmane-666569f655-hc8sx" Jan 26 18:18:24.853423 kubelet[2826]: I0126 18:18:24.853348 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp5p7\" (UniqueName: \"kubernetes.io/projected/b97323b3-2319-4d70-9a63-c4ff89b68e41-kube-api-access-tp5p7\") pod \"whisker-756c98c54f-cps8m\" (UID: \"b97323b3-2319-4d70-9a63-c4ff89b68e41\") " pod="calico-system/whisker-756c98c54f-cps8m" Jan 26 18:18:24.853423 kubelet[2826]: I0126 18:18:24.853371 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv6jh\" (UniqueName: \"kubernetes.io/projected/2acaefc0-3f5b-4b21-902d-9d452b4924d3-kube-api-access-xv6jh\") pod \"calico-apiserver-6dc747cc9d-4d52w\" (UID: \"2acaefc0-3f5b-4b21-902d-9d452b4924d3\") " pod="calico-apiserver/calico-apiserver-6dc747cc9d-4d52w" Jan 26 18:18:24.853948 kubelet[2826]: I0126 18:18:24.853393 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e2e25c6b-345b-4146-b644-2efa5b4232f3-goldmane-key-pair\") pod \"goldmane-666569f655-hc8sx\" (UID: \"e2e25c6b-345b-4146-b644-2efa5b4232f3\") " pod="calico-system/goldmane-666569f655-hc8sx" Jan 26 18:18:24.853948 kubelet[2826]: I0126 18:18:24.853416 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85693ca0-4fb9-455c-bc62-47950c5b8df8-config-volume\") pod \"coredns-668d6bf9bc-6586m\" (UID: \"85693ca0-4fb9-455c-bc62-47950c5b8df8\") " pod="kube-system/coredns-668d6bf9bc-6586m" Jan 26 18:18:24.853948 kubelet[2826]: I0126 18:18:24.853438 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2kbk\" (UniqueName: \"kubernetes.io/projected/e2e25c6b-345b-4146-b644-2efa5b4232f3-kube-api-access-d2kbk\") pod \"goldmane-666569f655-hc8sx\" (UID: \"e2e25c6b-345b-4146-b644-2efa5b4232f3\") " pod="calico-system/goldmane-666569f655-hc8sx" Jan 26 18:18:24.853948 kubelet[2826]: I0126 18:18:24.853459 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5cx5\" (UniqueName: \"kubernetes.io/projected/abefa45c-cb84-41f4-b47b-49099e1244be-kube-api-access-j5cx5\") pod \"calico-kube-controllers-6df84d7cd-llmh4\" (UID: \"abefa45c-cb84-41f4-b47b-49099e1244be\") " pod="calico-system/calico-kube-controllers-6df84d7cd-llmh4" Jan 26 18:18:24.853948 kubelet[2826]: I0126 18:18:24.853481 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6c6db6ff-501e-4b31-91b6-e1eb34423484-calico-apiserver-certs\") pod \"calico-apiserver-6dc747cc9d-68rtx\" (UID: \"6c6db6ff-501e-4b31-91b6-e1eb34423484\") " pod="calico-apiserver/calico-apiserver-6dc747cc9d-68rtx" Jan 26 18:18:24.854124 kubelet[2826]: I0126 18:18:24.853642 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abefa45c-cb84-41f4-b47b-49099e1244be-tigera-ca-bundle\") pod \"calico-kube-controllers-6df84d7cd-llmh4\" (UID: \"abefa45c-cb84-41f4-b47b-49099e1244be\") " 
pod="calico-system/calico-kube-controllers-6df84d7cd-llmh4" Jan 26 18:18:24.855645 kubelet[2826]: I0126 18:18:24.855538 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b97323b3-2319-4d70-9a63-c4ff89b68e41-whisker-ca-bundle\") pod \"whisker-756c98c54f-cps8m\" (UID: \"b97323b3-2319-4d70-9a63-c4ff89b68e41\") " pod="calico-system/whisker-756c98c54f-cps8m" Jan 26 18:18:24.855645 kubelet[2826]: I0126 18:18:24.855575 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e25c6b-345b-4146-b644-2efa5b4232f3-goldmane-ca-bundle\") pod \"goldmane-666569f655-hc8sx\" (UID: \"e2e25c6b-345b-4146-b644-2efa5b4232f3\") " pod="calico-system/goldmane-666569f655-hc8sx" Jan 26 18:18:24.917163 kubelet[2826]: E0126 18:18:24.916925 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:24.918424 containerd[1597]: time="2026-01-26T18:18:24.918180452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 26 18:18:24.989773 kubelet[2826]: E0126 18:18:24.989738 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:24.995458 containerd[1597]: time="2026-01-26T18:18:24.995251744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6586m,Uid:85693ca0-4fb9-455c-bc62-47950c5b8df8,Namespace:kube-system,Attempt:0,}" Jan 26 18:18:25.011774 containerd[1597]: time="2026-01-26T18:18:25.011721409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc747cc9d-68rtx,Uid:6c6db6ff-501e-4b31-91b6-e1eb34423484,Namespace:calico-apiserver,Attempt:0,}" Jan 26 18:18:25.021998 containerd[1597]: time="2026-01-26T18:18:25.021750553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hc8sx,Uid:e2e25c6b-345b-4146-b644-2efa5b4232f3,Namespace:calico-system,Attempt:0,}" Jan 26 18:18:25.044648 containerd[1597]: time="2026-01-26T18:18:25.044373480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc747cc9d-4d52w,Uid:2acaefc0-3f5b-4b21-902d-9d452b4924d3,Namespace:calico-apiserver,Attempt:0,}" Jan 26 18:18:25.044648 containerd[1597]: time="2026-01-26T18:18:25.044496840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-756c98c54f-cps8m,Uid:b97323b3-2319-4d70-9a63-c4ff89b68e41,Namespace:calico-system,Attempt:0,}" Jan 26 18:18:25.240777 systemd[1]: Created slice kubepods-besteffort-pod6589570b_6489_4043_b23c_e5a49733eb4e.slice - libcontainer container kubepods-besteffort-pod6589570b_6489_4043_b23c_e5a49733eb4e.slice. 
Jan 26 18:18:25.251682 containerd[1597]: time="2026-01-26T18:18:25.250984444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4jch,Uid:6589570b-6489-4043-b23c-e5a49733eb4e,Namespace:calico-system,Attempt:0,}" Jan 26 18:18:25.291768 kubelet[2826]: E0126 18:18:25.291682 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:25.294742 containerd[1597]: time="2026-01-26T18:18:25.294639106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xxkhp,Uid:0f7dc91b-7e8b-44a4-acf9-cafa269e417b,Namespace:kube-system,Attempt:0,}" Jan 26 18:18:25.304785 containerd[1597]: time="2026-01-26T18:18:25.304694630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6df84d7cd-llmh4,Uid:abefa45c-cb84-41f4-b47b-49099e1244be,Namespace:calico-system,Attempt:0,}" Jan 26 18:18:25.317532 containerd[1597]: time="2026-01-26T18:18:25.316217663Z" level=error msg="Failed to destroy network for sandbox \"e9ba463f34cc041598cfcd5e08961c3793f55bcffad4494f6f879131257fb4f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.332279 containerd[1597]: time="2026-01-26T18:18:25.332191289Z" level=error msg="Failed to destroy network for sandbox \"d5cebfaeb2d7e5cde9189a32b672d0db074246861f940fa81c2937816c3c8b07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.349747 containerd[1597]: time="2026-01-26T18:18:25.349690564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6586m,Uid:85693ca0-4fb9-455c-bc62-47950c5b8df8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9ba463f34cc041598cfcd5e08961c3793f55bcffad4494f6f879131257fb4f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.352587 kubelet[2826]: E0126 18:18:25.351762 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9ba463f34cc041598cfcd5e08961c3793f55bcffad4494f6f879131257fb4f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.352587 kubelet[2826]: E0126 18:18:25.351913 2826 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9ba463f34cc041598cfcd5e08961c3793f55bcffad4494f6f879131257fb4f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6586m" Jan 26 18:18:25.352587 kubelet[2826]: E0126 18:18:25.351977 2826 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9ba463f34cc041598cfcd5e08961c3793f55bcffad4494f6f879131257fb4f0\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6586m" Jan 26 18:18:25.352782 kubelet[2826]: E0126 18:18:25.352196 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6586m_kube-system(85693ca0-4fb9-455c-bc62-47950c5b8df8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6586m_kube-system(85693ca0-4fb9-455c-bc62-47950c5b8df8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9ba463f34cc041598cfcd5e08961c3793f55bcffad4494f6f879131257fb4f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6586m" podUID="85693ca0-4fb9-455c-bc62-47950c5b8df8" Jan 26 18:18:25.355594 containerd[1597]: time="2026-01-26T18:18:25.355112281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hc8sx,Uid:e2e25c6b-345b-4146-b644-2efa5b4232f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5cebfaeb2d7e5cde9189a32b672d0db074246861f940fa81c2937816c3c8b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.356678 kubelet[2826]: E0126 18:18:25.356485 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5cebfaeb2d7e5cde9189a32b672d0db074246861f940fa81c2937816c3c8b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.356794 kubelet[2826]: E0126 18:18:25.356684 2826 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5cebfaeb2d7e5cde9189a32b672d0db074246861f940fa81c2937816c3c8b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hc8sx" Jan 26 18:18:25.356794 kubelet[2826]: E0126 18:18:25.356721 2826 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5cebfaeb2d7e5cde9189a32b672d0db074246861f940fa81c2937816c3c8b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hc8sx" Jan 26 18:18:25.357257 kubelet[2826]: E0126 18:18:25.356777 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-hc8sx_calico-system(e2e25c6b-345b-4146-b644-2efa5b4232f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-hc8sx_calico-system(e2e25c6b-345b-4146-b644-2efa5b4232f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5cebfaeb2d7e5cde9189a32b672d0db074246861f940fa81c2937816c3c8b07\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-hc8sx" podUID="e2e25c6b-345b-4146-b644-2efa5b4232f3" Jan 26 18:18:25.401118 containerd[1597]: time="2026-01-26T18:18:25.400971778Z" level=error msg="Failed to destroy network for sandbox \"f607216aa843429e015003c7e326f0fa2a190cf7182314d7e2cde1aa81830810\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.411117 containerd[1597]: time="2026-01-26T18:18:25.410979903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-756c98c54f-cps8m,Uid:b97323b3-2319-4d70-9a63-c4ff89b68e41,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f607216aa843429e015003c7e326f0fa2a190cf7182314d7e2cde1aa81830810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.412436 kubelet[2826]: E0126 18:18:25.412274 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f607216aa843429e015003c7e326f0fa2a190cf7182314d7e2cde1aa81830810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.412436 kubelet[2826]: E0126 18:18:25.412351 2826 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f607216aa843429e015003c7e326f0fa2a190cf7182314d7e2cde1aa81830810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-756c98c54f-cps8m" Jan 26 18:18:25.412436 kubelet[2826]: E0126 18:18:25.412385 2826 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f607216aa843429e015003c7e326f0fa2a190cf7182314d7e2cde1aa81830810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-756c98c54f-cps8m" Jan 26 18:18:25.412639 kubelet[2826]: E0126 18:18:25.412431 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-756c98c54f-cps8m_calico-system(b97323b3-2319-4d70-9a63-c4ff89b68e41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-756c98c54f-cps8m_calico-system(b97323b3-2319-4d70-9a63-c4ff89b68e41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f607216aa843429e015003c7e326f0fa2a190cf7182314d7e2cde1aa81830810\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-756c98c54f-cps8m" podUID="b97323b3-2319-4d70-9a63-c4ff89b68e41" Jan 26 18:18:25.428775 containerd[1597]: time="2026-01-26T18:18:25.428365251Z" 
level=error msg="Failed to destroy network for sandbox \"97488bab02dbc9c5a772c8aafd4b2166d899e3987717b86adc56b5eeb43a88d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.436792 containerd[1597]: time="2026-01-26T18:18:25.436651808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc747cc9d-4d52w,Uid:2acaefc0-3f5b-4b21-902d-9d452b4924d3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97488bab02dbc9c5a772c8aafd4b2166d899e3987717b86adc56b5eeb43a88d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.438303 kubelet[2826]: E0126 18:18:25.437243 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97488bab02dbc9c5a772c8aafd4b2166d899e3987717b86adc56b5eeb43a88d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.438303 kubelet[2826]: E0126 18:18:25.437302 2826 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97488bab02dbc9c5a772c8aafd4b2166d899e3987717b86adc56b5eeb43a88d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc747cc9d-4d52w" Jan 26 18:18:25.438303 kubelet[2826]: E0126 18:18:25.437328 2826 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97488bab02dbc9c5a772c8aafd4b2166d899e3987717b86adc56b5eeb43a88d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc747cc9d-4d52w" Jan 26 18:18:25.438592 kubelet[2826]: E0126 18:18:25.437364 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dc747cc9d-4d52w_calico-apiserver(2acaefc0-3f5b-4b21-902d-9d452b4924d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dc747cc9d-4d52w_calico-apiserver(2acaefc0-3f5b-4b21-902d-9d452b4924d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97488bab02dbc9c5a772c8aafd4b2166d899e3987717b86adc56b5eeb43a88d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-4d52w" podUID="2acaefc0-3f5b-4b21-902d-9d452b4924d3" Jan 26 18:18:25.466123 containerd[1597]: time="2026-01-26T18:18:25.444071996Z" level=error msg="Failed to destroy network for sandbox \"fa0a227b2dbc0123db41c1857d739fedf9d719d63623b45cfccf521d30db2430\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 26 18:18:25.470178 containerd[1597]: time="2026-01-26T18:18:25.470055476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc747cc9d-68rtx,Uid:6c6db6ff-501e-4b31-91b6-e1eb34423484,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa0a227b2dbc0123db41c1857d739fedf9d719d63623b45cfccf521d30db2430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.471118 kubelet[2826]: E0126 18:18:25.471075 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa0a227b2dbc0123db41c1857d739fedf9d719d63623b45cfccf521d30db2430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.471364 kubelet[2826]: E0126 18:18:25.471336 2826 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa0a227b2dbc0123db41c1857d739fedf9d719d63623b45cfccf521d30db2430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc747cc9d-68rtx" Jan 26 18:18:25.471878 kubelet[2826]: E0126 18:18:25.471637 2826 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa0a227b2dbc0123db41c1857d739fedf9d719d63623b45cfccf521d30db2430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc747cc9d-68rtx" Jan 26 18:18:25.472193 kubelet[2826]: E0126 18:18:25.471800 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dc747cc9d-68rtx_calico-apiserver(6c6db6ff-501e-4b31-91b6-e1eb34423484)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dc747cc9d-68rtx_calico-apiserver(6c6db6ff-501e-4b31-91b6-e1eb34423484)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa0a227b2dbc0123db41c1857d739fedf9d719d63623b45cfccf521d30db2430\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-68rtx" podUID="6c6db6ff-501e-4b31-91b6-e1eb34423484" Jan 26 18:18:25.482993 containerd[1597]: time="2026-01-26T18:18:25.482777272Z" level=error msg="Failed to destroy network for sandbox \"4c8573e6fe838ece80b2839127d8a6558925f1c1e133f5a66277e2138374a060\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.486996 containerd[1597]: time="2026-01-26T18:18:25.486928784Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4jch,Uid:6589570b-6489-4043-b23c-e5a49733eb4e,Namespace:calico-system,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8573e6fe838ece80b2839127d8a6558925f1c1e133f5a66277e2138374a060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.487893 kubelet[2826]: E0126 18:18:25.487319 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8573e6fe838ece80b2839127d8a6558925f1c1e133f5a66277e2138374a060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.487893 kubelet[2826]: E0126 18:18:25.487375 2826 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8573e6fe838ece80b2839127d8a6558925f1c1e133f5a66277e2138374a060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x4jch" Jan 26 18:18:25.487893 kubelet[2826]: E0126 18:18:25.487418 2826 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8573e6fe838ece80b2839127d8a6558925f1c1e133f5a66277e2138374a060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x4jch" Jan 26 18:18:25.488080 kubelet[2826]: E0126 18:18:25.487463 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x4jch_calico-system(6589570b-6489-4043-b23c-e5a49733eb4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x4jch_calico-system(6589570b-6489-4043-b23c-e5a49733eb4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c8573e6fe838ece80b2839127d8a6558925f1c1e133f5a66277e2138374a060\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x4jch" podUID="6589570b-6489-4043-b23c-e5a49733eb4e" Jan 26 18:18:25.510201 containerd[1597]: time="2026-01-26T18:18:25.509784286Z" level=error msg="Failed to destroy network for sandbox \"736a6f1c5d8b743a7ef8e859b15f1c29338fbc6794cc428e767f3e28aa9256c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.515761 containerd[1597]: time="2026-01-26T18:18:25.515575020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xxkhp,Uid:0f7dc91b-7e8b-44a4-acf9-cafa269e417b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"736a6f1c5d8b743a7ef8e859b15f1c29338fbc6794cc428e767f3e28aa9256c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.516127 containerd[1597]: 
time="2026-01-26T18:18:25.515913777Z" level=error msg="Failed to destroy network for sandbox \"d965b7247798a38dad9e42f6b627dcce92d7db693c97c985cf57ac1dd714c279\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.516181 kubelet[2826]: E0126 18:18:25.516030 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"736a6f1c5d8b743a7ef8e859b15f1c29338fbc6794cc428e767f3e28aa9256c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.516181 kubelet[2826]: E0126 18:18:25.516090 2826 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"736a6f1c5d8b743a7ef8e859b15f1c29338fbc6794cc428e767f3e28aa9256c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xxkhp" Jan 26 18:18:25.516181 kubelet[2826]: E0126 18:18:25.516110 2826 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"736a6f1c5d8b743a7ef8e859b15f1c29338fbc6794cc428e767f3e28aa9256c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xxkhp" Jan 26 18:18:25.516337 kubelet[2826]: E0126 18:18:25.516192 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xxkhp_kube-system(0f7dc91b-7e8b-44a4-acf9-cafa269e417b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xxkhp_kube-system(0f7dc91b-7e8b-44a4-acf9-cafa269e417b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"736a6f1c5d8b743a7ef8e859b15f1c29338fbc6794cc428e767f3e28aa9256c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xxkhp" podUID="0f7dc91b-7e8b-44a4-acf9-cafa269e417b" Jan 26 18:18:25.520020 containerd[1597]: time="2026-01-26T18:18:25.519919813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6df84d7cd-llmh4,Uid:abefa45c-cb84-41f4-b47b-49099e1244be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d965b7247798a38dad9e42f6b627dcce92d7db693c97c985cf57ac1dd714c279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.520326 kubelet[2826]: E0126 18:18:25.520232 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d965b7247798a38dad9e42f6b627dcce92d7db693c97c985cf57ac1dd714c279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 26 18:18:25.520326 kubelet[2826]: E0126 18:18:25.520291 2826 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d965b7247798a38dad9e42f6b627dcce92d7db693c97c985cf57ac1dd714c279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6df84d7cd-llmh4" Jan 26 18:18:25.520326 kubelet[2826]: E0126 18:18:25.520317 2826 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d965b7247798a38dad9e42f6b627dcce92d7db693c97c985cf57ac1dd714c279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6df84d7cd-llmh4" Jan 26 18:18:25.520762 kubelet[2826]: E0126 18:18:25.520388 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6df84d7cd-llmh4_calico-system(abefa45c-cb84-41f4-b47b-49099e1244be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6df84d7cd-llmh4_calico-system(abefa45c-cb84-41f4-b47b-49099e1244be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d965b7247798a38dad9e42f6b627dcce92d7db693c97c985cf57ac1dd714c279\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6df84d7cd-llmh4" podUID="abefa45c-cb84-41f4-b47b-49099e1244be" Jan 26 18:18:30.352455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4259831933.mount: Deactivated successfully. 
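Annotation (not journal output): every RunPodSandbox attempt above fails with the same CNI error, "stat /var/lib/calico/nodename: no such file or directory", which simply means the calico-node agent has not yet started and written its nodename file; once it does, the sandboxes are retried successfully further down. A minimal check for this state, assuming shell access to the node and the label `k8s-app=calico-node` used by the stock Calico manifests (the label is an assumption, it may differ in other installs):

    # Hypothetical diagnostic sketch; paths and namespace are taken from the errors above.
    import os
    import subprocess

    NODENAME = "/var/lib/calico/nodename"   # file the CNI plugin stats in the errors above
    if os.path.exists(NODENAME):
        print("nodename file present:", open(NODENAME).read().strip())
    else:
        print("nodename file missing - calico-node has not initialised yet")

    # Assumes kubectl is available on the node; namespace matches the pods in the log.
    subprocess.run(["kubectl", "get", "pods", "-n", "calico-system",
                    "-l", "k8s-app=calico-node", "-o", "wide"], check=False)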
Jan 26 18:18:30.549815 containerd[1597]: time="2026-01-26T18:18:30.549703459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:30.551168 containerd[1597]: time="2026-01-26T18:18:30.551069056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880555" Jan 26 18:18:30.552910 containerd[1597]: time="2026-01-26T18:18:30.552802451Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:30.554937 containerd[1597]: time="2026-01-26T18:18:30.554773549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 26 18:18:30.555359 containerd[1597]: time="2026-01-26T18:18:30.555294116Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.637035769s" Jan 26 18:18:30.555359 containerd[1597]: time="2026-01-26T18:18:30.555346063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 26 18:18:30.593781 containerd[1597]: time="2026-01-26T18:18:30.593644148Z" level=info msg="CreateContainer within sandbox \"a908b8c501289bbc3e14f7a897292cc6d927a62a1cb47d3ff3c9105cd0143241\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 26 18:18:30.613219 containerd[1597]: time="2026-01-26T18:18:30.613064842Z" level=info msg="Container 6af7782834896a4b61663be5c0a5ea6085f333673e6f20fb4867220149cd004e: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:18:30.629258 containerd[1597]: time="2026-01-26T18:18:30.629016627Z" level=info msg="CreateContainer within sandbox \"a908b8c501289bbc3e14f7a897292cc6d927a62a1cb47d3ff3c9105cd0143241\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6af7782834896a4b61663be5c0a5ea6085f333673e6f20fb4867220149cd004e\"" Jan 26 18:18:30.629921 containerd[1597]: time="2026-01-26T18:18:30.629798056Z" level=info msg="StartContainer for \"6af7782834896a4b61663be5c0a5ea6085f333673e6f20fb4867220149cd004e\"" Jan 26 18:18:30.631912 containerd[1597]: time="2026-01-26T18:18:30.631788550Z" level=info msg="connecting to shim 6af7782834896a4b61663be5c0a5ea6085f333673e6f20fb4867220149cd004e" address="unix:///run/containerd/s/633d715e13537ad55142537de9e680e446935053766c87ac055f23dbcef5deb1" protocol=ttrpc version=3 Jan 26 18:18:30.681185 systemd[1]: Started cri-containerd-6af7782834896a4b61663be5c0a5ea6085f333673e6f20fb4867220149cd004e.scope - libcontainer container 6af7782834896a4b61663be5c0a5ea6085f333673e6f20fb4867220149cd004e. 
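Annotation (not journal output): the PullImage entry above reports an image of 156883537 bytes fetched in 5.637035769s, i.e. roughly 26.5 MiB/s. A one-line arithmetic check using only the numbers printed in the log:

    # Values copied from the "Pulled image" entry above; only a consistency check.
    size_bytes = 156_883_537
    elapsed_s = 5.637035769
    print(f"{size_bytes / elapsed_s / 2**20:.1f} MiB/s")   # ~26.5 MiB/s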
Jan 26 18:18:30.786000 audit: BPF prog-id=172 op=LOAD Jan 26 18:18:30.786000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3292 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:30.802506 kernel: audit: type=1334 audit(1769451510.786:568): prog-id=172 op=LOAD Jan 26 18:18:30.802606 kernel: audit: type=1300 audit(1769451510.786:568): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3292 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:30.802652 kernel: audit: type=1327 audit(1769451510.786:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661663737383238333438393661346236313636336265356330613565 Jan 26 18:18:30.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661663737383238333438393661346236313636336265356330613565 Jan 26 18:18:30.786000 audit: BPF prog-id=173 op=LOAD Jan 26 18:18:30.816779 kernel: audit: type=1334 audit(1769451510.786:569): prog-id=173 op=LOAD Jan 26 18:18:30.786000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3292 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:30.832956 kernel: audit: type=1300 audit(1769451510.786:569): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3292 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:30.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661663737383238333438393661346236313636336265356330613565 Jan 26 18:18:30.786000 audit: BPF prog-id=173 op=UNLOAD Jan 26 18:18:30.847048 kernel: audit: type=1327 audit(1769451510.786:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661663737383238333438393661346236313636336265356330613565 Jan 26 18:18:30.847163 kernel: audit: type=1334 audit(1769451510.786:570): prog-id=173 op=UNLOAD Jan 26 18:18:30.847192 kernel: audit: type=1300 audit(1769451510.786:570): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:30.786000 audit[3882]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 
items=0 ppid=3292 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:30.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661663737383238333438393661346236313636336265356330613565 Jan 26 18:18:30.872162 kernel: audit: type=1327 audit(1769451510.786:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661663737383238333438393661346236313636336265356330613565 Jan 26 18:18:30.872310 kernel: audit: type=1334 audit(1769451510.786:571): prog-id=172 op=UNLOAD Jan 26 18:18:30.786000 audit: BPF prog-id=172 op=UNLOAD Jan 26 18:18:30.786000 audit[3882]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:30.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661663737383238333438393661346236313636336265356330613565 Jan 26 18:18:30.786000 audit: BPF prog-id=174 op=LOAD Jan 26 18:18:30.786000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3292 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:30.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661663737383238333438393661346236313636336265356330613565 Jan 26 18:18:30.945115 containerd[1597]: time="2026-01-26T18:18:30.944180075Z" level=info msg="StartContainer for \"6af7782834896a4b61663be5c0a5ea6085f333673e6f20fb4867220149cd004e\" returns successfully" Jan 26 18:18:30.987204 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 26 18:18:30.988271 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
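Annotation (not journal output): the long PROCTITLE values in the audit records above are the audited process's argv, hex-encoded with NUL separators, and auditd truncates them. A short sketch that decodes the runc proctitle seen above (hex copied verbatim from the log; auditd cuts the container ID short, so the decoded path ends mid-ID):

    # Decode an audit PROCTITLE field: hex string of NUL-separated argv.
    hexstr = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
              "2F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F"
              "2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E"
              "696F2F3661663737383238333438393661346236313636336265356330613565")
    print(bytes.fromhex(hexstr).replace(b"\x00", b" ").decode())
    # -> runc --root /run/containerd/runc/k8s.io --log
    #    /run/containerd/io.containerd.runtime.v2.task/k8s.io/6af7782834896a4b61663be5c0a5e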
Jan 26 18:18:31.340337 kubelet[2826]: I0126 18:18:31.340294 2826 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp5p7\" (UniqueName: \"kubernetes.io/projected/b97323b3-2319-4d70-9a63-c4ff89b68e41-kube-api-access-tp5p7\") pod \"b97323b3-2319-4d70-9a63-c4ff89b68e41\" (UID: \"b97323b3-2319-4d70-9a63-c4ff89b68e41\") " Jan 26 18:18:31.341543 kubelet[2826]: I0126 18:18:31.341009 2826 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b97323b3-2319-4d70-9a63-c4ff89b68e41-whisker-backend-key-pair\") pod \"b97323b3-2319-4d70-9a63-c4ff89b68e41\" (UID: \"b97323b3-2319-4d70-9a63-c4ff89b68e41\") " Jan 26 18:18:31.341543 kubelet[2826]: I0126 18:18:31.341042 2826 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b97323b3-2319-4d70-9a63-c4ff89b68e41-whisker-ca-bundle\") pod \"b97323b3-2319-4d70-9a63-c4ff89b68e41\" (UID: \"b97323b3-2319-4d70-9a63-c4ff89b68e41\") " Jan 26 18:18:31.343161 kubelet[2826]: I0126 18:18:31.343113 2826 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b97323b3-2319-4d70-9a63-c4ff89b68e41-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b97323b3-2319-4d70-9a63-c4ff89b68e41" (UID: "b97323b3-2319-4d70-9a63-c4ff89b68e41"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 18:18:31.350901 kubelet[2826]: I0126 18:18:31.350770 2826 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b97323b3-2319-4d70-9a63-c4ff89b68e41-kube-api-access-tp5p7" (OuterVolumeSpecName: "kube-api-access-tp5p7") pod "b97323b3-2319-4d70-9a63-c4ff89b68e41" (UID: "b97323b3-2319-4d70-9a63-c4ff89b68e41"). InnerVolumeSpecName "kube-api-access-tp5p7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 18:18:31.352422 systemd[1]: var-lib-kubelet-pods-b97323b3\x2d2319\x2d4d70\x2d9a63\x2dc4ff89b68e41-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtp5p7.mount: Deactivated successfully. Jan 26 18:18:31.357151 kubelet[2826]: I0126 18:18:31.356996 2826 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97323b3-2319-4d70-9a63-c4ff89b68e41-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b97323b3-2319-4d70-9a63-c4ff89b68e41" (UID: "b97323b3-2319-4d70-9a63-c4ff89b68e41"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 18:18:31.358514 systemd[1]: var-lib-kubelet-pods-b97323b3\x2d2319\x2d4d70\x2d9a63\x2dc4ff89b68e41-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 26 18:18:31.442101 kubelet[2826]: I0126 18:18:31.442034 2826 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tp5p7\" (UniqueName: \"kubernetes.io/projected/b97323b3-2319-4d70-9a63-c4ff89b68e41-kube-api-access-tp5p7\") on node \"localhost\" DevicePath \"\"" Jan 26 18:18:31.442101 kubelet[2826]: I0126 18:18:31.442082 2826 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b97323b3-2319-4d70-9a63-c4ff89b68e41-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 26 18:18:31.442101 kubelet[2826]: I0126 18:18:31.442093 2826 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b97323b3-2319-4d70-9a63-c4ff89b68e41-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 26 18:18:31.958151 kubelet[2826]: E0126 18:18:31.958067 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:31.964936 systemd[1]: Removed slice kubepods-besteffort-podb97323b3_2319_4d70_9a63_c4ff89b68e41.slice - libcontainer container kubepods-besteffort-podb97323b3_2319_4d70_9a63_c4ff89b68e41.slice. Jan 26 18:18:31.988723 kubelet[2826]: I0126 18:18:31.988630 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k6nmt" podStartSLOduration=2.986650242 podStartE2EDuration="19.988615222s" podCreationTimestamp="2026-01-26 18:18:12 +0000 UTC" firstStartedPulling="2026-01-26 18:18:13.577036113 +0000 UTC m=+20.507896814" lastFinishedPulling="2026-01-26 18:18:30.579001092 +0000 UTC m=+37.509861794" observedRunningTime="2026-01-26 18:18:31.974950071 +0000 UTC m=+38.905810793" watchObservedRunningTime="2026-01-26 18:18:31.988615222 +0000 UTC m=+38.919475924" Jan 26 18:18:32.042544 systemd[1]: Created slice kubepods-besteffort-pod7dfdaf1a_1fc3_4ffb_9232_005b9874f7f8.slice - libcontainer container kubepods-besteffort-pod7dfdaf1a_1fc3_4ffb_9232_005b9874f7f8.slice. 
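Annotation (not journal output): the pod_startup_latency_tracker entry above is internally consistent; the E2E duration is observed-running minus pod creation, and the SLO duration appears to be that figure minus the image-pull window, which matches the reported 2.986650242s. A small check using the timestamps printed in the entry (nanoseconds truncated to microseconds for strptime):

    # Reconstruct the durations from the timestamps in the log entry above.
    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    created = datetime.strptime("2026-01-26 18:18:12.000000", fmt)
    pull_from = datetime.strptime("2026-01-26 18:18:13.577036", fmt)
    pull_to = datetime.strptime("2026-01-26 18:18:30.579001", fmt)
    running = datetime.strptime("2026-01-26 18:18:31.988615", fmt)

    e2e = (running - created).total_seconds()           # ~19.989s = podStartE2EDuration
    slo = e2e - (pull_to - pull_from).total_seconds()   # ~2.987s  = podStartSLOduration
    print(e2e, slo)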
Jan 26 18:18:32.047900 kubelet[2826]: I0126 18:18:32.047634 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8-whisker-backend-key-pair\") pod \"whisker-bf8465869-rt8sq\" (UID: \"7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8\") " pod="calico-system/whisker-bf8465869-rt8sq" Jan 26 18:18:32.047900 kubelet[2826]: I0126 18:18:32.047672 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgncg\" (UniqueName: \"kubernetes.io/projected/7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8-kube-api-access-bgncg\") pod \"whisker-bf8465869-rt8sq\" (UID: \"7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8\") " pod="calico-system/whisker-bf8465869-rt8sq" Jan 26 18:18:32.047900 kubelet[2826]: I0126 18:18:32.047699 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8-whisker-ca-bundle\") pod \"whisker-bf8465869-rt8sq\" (UID: \"7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8\") " pod="calico-system/whisker-bf8465869-rt8sq" Jan 26 18:18:32.347553 containerd[1597]: time="2026-01-26T18:18:32.347470228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bf8465869-rt8sq,Uid:7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8,Namespace:calico-system,Attempt:0,}" Jan 26 18:18:32.657119 systemd-networkd[1518]: calia18df8605e0: Link UP Jan 26 18:18:32.657651 systemd-networkd[1518]: calia18df8605e0: Gained carrier Jan 26 18:18:32.687753 containerd[1597]: 2026-01-26 18:18:32.386 [INFO][3949] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 26 18:18:32.687753 containerd[1597]: 2026-01-26 18:18:32.413 [INFO][3949] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--bf8465869--rt8sq-eth0 whisker-bf8465869- calico-system 7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8 897 0 2026-01-26 18:18:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bf8465869 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-bf8465869-rt8sq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia18df8605e0 [] [] }} ContainerID="25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" Namespace="calico-system" Pod="whisker-bf8465869-rt8sq" WorkloadEndpoint="localhost-k8s-whisker--bf8465869--rt8sq-" Jan 26 18:18:32.687753 containerd[1597]: 2026-01-26 18:18:32.414 [INFO][3949] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" Namespace="calico-system" Pod="whisker-bf8465869-rt8sq" WorkloadEndpoint="localhost-k8s-whisker--bf8465869--rt8sq-eth0" Jan 26 18:18:32.687753 containerd[1597]: 2026-01-26 18:18:32.573 [INFO][3963] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" HandleID="k8s-pod-network.25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" Workload="localhost-k8s-whisker--bf8465869--rt8sq-eth0" Jan 26 18:18:32.688104 containerd[1597]: 2026-01-26 18:18:32.574 [INFO][3963] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" 
HandleID="k8s-pod-network.25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" Workload="localhost-k8s-whisker--bf8465869--rt8sq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e430), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-bf8465869-rt8sq", "timestamp":"2026-01-26 18:18:32.57300576 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:18:32.688104 containerd[1597]: 2026-01-26 18:18:32.574 [INFO][3963] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 26 18:18:32.688104 containerd[1597]: 2026-01-26 18:18:32.574 [INFO][3963] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 26 18:18:32.688104 containerd[1597]: 2026-01-26 18:18:32.574 [INFO][3963] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:18:32.688104 containerd[1597]: 2026-01-26 18:18:32.591 [INFO][3963] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" host="localhost" Jan 26 18:18:32.688104 containerd[1597]: 2026-01-26 18:18:32.606 [INFO][3963] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:18:32.688104 containerd[1597]: 2026-01-26 18:18:32.614 [INFO][3963] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:18:32.688104 containerd[1597]: 2026-01-26 18:18:32.619 [INFO][3963] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:32.688104 containerd[1597]: 2026-01-26 18:18:32.622 [INFO][3963] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:32.688104 containerd[1597]: 2026-01-26 18:18:32.622 [INFO][3963] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" host="localhost" Jan 26 18:18:32.688345 containerd[1597]: 2026-01-26 18:18:32.624 [INFO][3963] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c Jan 26 18:18:32.688345 containerd[1597]: 2026-01-26 18:18:32.630 [INFO][3963] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" host="localhost" Jan 26 18:18:32.688345 containerd[1597]: 2026-01-26 18:18:32.636 [INFO][3963] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" host="localhost" Jan 26 18:18:32.688345 containerd[1597]: 2026-01-26 18:18:32.636 [INFO][3963] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" host="localhost" Jan 26 18:18:32.688345 containerd[1597]: 2026-01-26 18:18:32.636 [INFO][3963] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 26 18:18:32.688345 containerd[1597]: 2026-01-26 18:18:32.636 [INFO][3963] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" HandleID="k8s-pod-network.25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" Workload="localhost-k8s-whisker--bf8465869--rt8sq-eth0" Jan 26 18:18:32.688509 containerd[1597]: 2026-01-26 18:18:32.639 [INFO][3949] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" Namespace="calico-system" Pod="whisker-bf8465869-rt8sq" WorkloadEndpoint="localhost-k8s-whisker--bf8465869--rt8sq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bf8465869--rt8sq-eth0", GenerateName:"whisker-bf8465869-", Namespace:"calico-system", SelfLink:"", UID:"7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 18, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bf8465869", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-bf8465869-rt8sq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia18df8605e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:32.688509 containerd[1597]: 2026-01-26 18:18:32.639 [INFO][3949] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" Namespace="calico-system" Pod="whisker-bf8465869-rt8sq" WorkloadEndpoint="localhost-k8s-whisker--bf8465869--rt8sq-eth0" Jan 26 18:18:32.688621 containerd[1597]: 2026-01-26 18:18:32.640 [INFO][3949] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia18df8605e0 ContainerID="25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" Namespace="calico-system" Pod="whisker-bf8465869-rt8sq" WorkloadEndpoint="localhost-k8s-whisker--bf8465869--rt8sq-eth0" Jan 26 18:18:32.688621 containerd[1597]: 2026-01-26 18:18:32.661 [INFO][3949] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" Namespace="calico-system" Pod="whisker-bf8465869-rt8sq" WorkloadEndpoint="localhost-k8s-whisker--bf8465869--rt8sq-eth0" Jan 26 18:18:32.688667 containerd[1597]: 2026-01-26 18:18:32.661 [INFO][3949] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" Namespace="calico-system" Pod="whisker-bf8465869-rt8sq" WorkloadEndpoint="localhost-k8s-whisker--bf8465869--rt8sq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bf8465869--rt8sq-eth0", GenerateName:"whisker-bf8465869-", Namespace:"calico-system", SelfLink:"", UID:"7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 18, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bf8465869", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c", Pod:"whisker-bf8465869-rt8sq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia18df8605e0", MAC:"be:e8:15:97:1b:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:32.688741 containerd[1597]: 2026-01-26 18:18:32.684 [INFO][3949] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" Namespace="calico-system" Pod="whisker-bf8465869-rt8sq" WorkloadEndpoint="localhost-k8s-whisker--bf8465869--rt8sq-eth0" Jan 26 18:18:32.841000 audit: BPF prog-id=175 op=LOAD Jan 26 18:18:32.841000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcae4ff050 a2=98 a3=1fffffffffffffff items=0 ppid=3990 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.841000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 26 18:18:32.842000 audit: BPF prog-id=175 op=UNLOAD Jan 26 18:18:32.842000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcae4ff020 a3=0 items=0 ppid=3990 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.842000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 26 18:18:32.842000 audit: BPF prog-id=176 op=LOAD Jan 26 18:18:32.842000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcae4fef30 a2=94 a3=3 items=0 ppid=3990 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.842000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 26 18:18:32.842000 audit: BPF prog-id=176 op=UNLOAD Jan 26 18:18:32.842000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcae4fef30 a2=94 a3=3 items=0 ppid=3990 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.842000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 26 18:18:32.842000 audit: BPF prog-id=177 op=LOAD Jan 26 18:18:32.842000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcae4fef70 a2=94 a3=7ffcae4ff150 items=0 ppid=3990 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.842000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 26 18:18:32.842000 audit: BPF prog-id=177 op=UNLOAD Jan 26 18:18:32.842000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcae4fef70 a2=94 a3=7ffcae4ff150 items=0 ppid=3990 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.842000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 26 18:18:32.843000 audit: BPF prog-id=178 op=LOAD Jan 26 18:18:32.843000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd44890430 a2=98 a3=3 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.843000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:32.844000 audit: BPF prog-id=178 op=UNLOAD Jan 26 18:18:32.844000 audit[4113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd44890400 a3=0 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.844000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:32.844000 audit: BPF prog-id=179 op=LOAD Jan 26 18:18:32.844000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 
a1=7ffd44890220 a2=94 a3=54428f items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.844000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:32.844000 audit: BPF prog-id=179 op=UNLOAD Jan 26 18:18:32.844000 audit[4113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd44890220 a2=94 a3=54428f items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.844000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:32.844000 audit: BPF prog-id=180 op=LOAD Jan 26 18:18:32.844000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd44890250 a2=94 a3=2 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.844000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:32.844000 audit: BPF prog-id=180 op=UNLOAD Jan 26 18:18:32.844000 audit[4113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd44890250 a2=0 a3=2 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.844000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:32.852199 containerd[1597]: time="2026-01-26T18:18:32.851661807Z" level=info msg="connecting to shim 25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c" address="unix:///run/containerd/s/dd200e26eaeea02de6a5cf341ea4d0da331b2ac70a2556ceb0d24edbfac64ebb" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:18:32.920352 systemd[1]: Started cri-containerd-25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c.scope - libcontainer container 25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c. 
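Annotation (not journal output): the Calico IPAM entries further up claim 192.168.88.129 out of the node's affine block 192.168.88.128/26 (64 addresses) and then program the whisker workload endpoint with the single address 192.168.88.129/32. A quick sketch with the standard-library ipaddress module confirming those numbers line up:

    # Sanity check of the IPAM values reported in the log.
    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")
    addr = ipaddress.ip_address("192.168.88.129")

    print(addr in block)        # True: the claimed IP is inside the affine block
    print(block.num_addresses)  # 64 addresses per /26 block
    print(ipaddress.ip_network("192.168.88.129/32"))  # what the endpoint is programmed with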
Jan 26 18:18:32.938000 audit: BPF prog-id=181 op=LOAD Jan 26 18:18:32.938000 audit: BPF prog-id=182 op=LOAD Jan 26 18:18:32.938000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4114 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235646463373364633930373363323963343661383034666533616639 Jan 26 18:18:32.938000 audit: BPF prog-id=182 op=UNLOAD Jan 26 18:18:32.938000 audit[4126]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4114 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235646463373364633930373363323963343661383034666533616639 Jan 26 18:18:32.939000 audit: BPF prog-id=183 op=LOAD Jan 26 18:18:32.939000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4114 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235646463373364633930373363323963343661383034666533616639 Jan 26 18:18:32.939000 audit: BPF prog-id=184 op=LOAD Jan 26 18:18:32.939000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4114 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235646463373364633930373363323963343661383034666533616639 Jan 26 18:18:32.939000 audit: BPF prog-id=184 op=UNLOAD Jan 26 18:18:32.939000 audit[4126]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4114 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235646463373364633930373363323963343661383034666533616639 Jan 26 18:18:32.939000 audit: BPF prog-id=183 op=UNLOAD Jan 26 18:18:32.939000 audit[4126]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4114 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235646463373364633930373363323963343661383034666533616639 Jan 26 18:18:32.939000 audit: BPF prog-id=185 op=LOAD Jan 26 18:18:32.939000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4114 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:32.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235646463373364633930373363323963343661383034666533616639 Jan 26 18:18:32.942577 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:18:32.977037 kubelet[2826]: I0126 18:18:32.976713 2826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:18:32.983038 kubelet[2826]: E0126 18:18:32.983000 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:33.011744 containerd[1597]: time="2026-01-26T18:18:33.011710600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bf8465869-rt8sq,Uid:7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"25ddc73dc9073c29c46a804fe3af9a36f5e0fbe443c8089db4f4b582af62c78c\"" Jan 26 18:18:33.016673 containerd[1597]: time="2026-01-26T18:18:33.016483733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 26 18:18:33.060000 audit: BPF prog-id=186 op=LOAD Jan 26 18:18:33.060000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd44890110 a2=94 a3=1 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.060000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:33.060000 audit: BPF prog-id=186 op=UNLOAD Jan 26 18:18:33.060000 audit[4113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd44890110 a2=94 a3=1 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.060000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:33.075000 audit: BPF prog-id=187 op=LOAD Jan 26 18:18:33.075000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd44890100 a2=94 a3=4 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.075000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:33.076000 audit: BPF prog-id=187 op=UNLOAD Jan 26 18:18:33.076000 audit[4113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd44890100 a2=0 a3=4 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.076000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:33.076000 audit: BPF prog-id=188 op=LOAD Jan 26 18:18:33.076000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd4488ff60 a2=94 a3=5 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.076000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:33.076000 audit: BPF prog-id=188 op=UNLOAD Jan 26 18:18:33.076000 audit[4113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd4488ff60 a2=0 a3=5 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.076000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:33.076000 audit: BPF prog-id=189 op=LOAD Jan 26 18:18:33.076000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd44890180 a2=94 a3=6 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.076000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:33.076000 audit: BPF prog-id=189 op=UNLOAD Jan 26 18:18:33.076000 audit[4113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd44890180 a2=0 a3=6 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.076000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:33.076000 audit: BPF prog-id=190 op=LOAD Jan 26 18:18:33.076000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd4488f930 a2=94 a3=88 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.076000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:33.077000 audit: BPF prog-id=191 op=LOAD Jan 26 18:18:33.077000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd4488f7b0 a2=94 a3=2 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.077000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:33.077000 audit: BPF prog-id=191 
op=UNLOAD Jan 26 18:18:33.077000 audit[4113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd4488f7e0 a2=0 a3=7ffd4488f8e0 items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.077000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:33.077000 audit: BPF prog-id=190 op=UNLOAD Jan 26 18:18:33.077000 audit[4113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2d00d10 a2=0 a3=9e81713f71dc0ece items=0 ppid=3990 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.077000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 26 18:18:33.080321 containerd[1597]: time="2026-01-26T18:18:33.078138788Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:33.080994 containerd[1597]: time="2026-01-26T18:18:33.080627567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 26 18:18:33.080994 containerd[1597]: time="2026-01-26T18:18:33.080698757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:33.082048 kubelet[2826]: E0126 18:18:33.081785 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:18:33.082147 kubelet[2826]: E0126 18:18:33.082128 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:18:33.094401 kubelet[2826]: E0126 18:18:33.094130 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:84b790fb7a3241c08a1d443333567276,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgncg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bf8465869-rt8sq_calico-system(7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:33.098895 containerd[1597]: time="2026-01-26T18:18:33.098705211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 26 18:18:33.108000 audit: BPF prog-id=192 op=LOAD Jan 26 18:18:33.108000 audit[4154]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefd94bf10 a2=98 a3=1999999999999999 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.108000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 26 18:18:33.108000 audit: BPF prog-id=192 op=UNLOAD Jan 26 18:18:33.108000 audit[4154]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffefd94bee0 a3=0 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.108000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 26 18:18:33.108000 audit: BPF prog-id=193 op=LOAD Jan 26 18:18:33.108000 audit[4154]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefd94bdf0 a2=94 a3=ffff items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.108000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 26 18:18:33.108000 audit: BPF prog-id=193 op=UNLOAD Jan 26 18:18:33.108000 audit[4154]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffefd94bdf0 a2=94 a3=ffff items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.108000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 26 18:18:33.108000 audit: BPF prog-id=194 op=LOAD Jan 26 18:18:33.108000 audit[4154]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefd94be30 a2=94 a3=7ffefd94c010 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.108000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 26 18:18:33.108000 audit: BPF prog-id=194 op=UNLOAD Jan 26 18:18:33.108000 audit[4154]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffefd94be30 a2=94 a3=7ffefd94c010 items=0 ppid=3990 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.108000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 26 18:18:33.167471 containerd[1597]: time="2026-01-26T18:18:33.167409670Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:33.169780 containerd[1597]: time="2026-01-26T18:18:33.169713885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 26 18:18:33.169933 containerd[1597]: time="2026-01-26T18:18:33.169892327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:33.170418 kubelet[2826]: E0126 18:18:33.170361 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:18:33.170551 kubelet[2826]: E0126 18:18:33.170432 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:18:33.170732 kubelet[2826]: E0126 18:18:33.170670 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgncg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bf8465869-rt8sq_calico-system(7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:33.173596 kubelet[2826]: E0126 18:18:33.173415 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bf8465869-rt8sq" podUID="7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8" Jan 26 18:18:33.200743 systemd-networkd[1518]: 
vxlan.calico: Link UP Jan 26 18:18:33.200774 systemd-networkd[1518]: vxlan.calico: Gained carrier Jan 26 18:18:33.229000 audit: BPF prog-id=195 op=LOAD Jan 26 18:18:33.229000 audit[4178]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe692ebb70 a2=98 a3=0 items=0 ppid=3990 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.229000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:18:33.229000 audit: BPF prog-id=195 op=UNLOAD Jan 26 18:18:33.229000 audit[4178]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe692ebb40 a3=0 items=0 ppid=3990 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.229000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:18:33.232000 audit: BPF prog-id=196 op=LOAD Jan 26 18:18:33.232000 audit[4178]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe692eb980 a2=94 a3=54428f items=0 ppid=3990 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.232000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:18:33.232000 audit: BPF prog-id=196 op=UNLOAD Jan 26 18:18:33.232000 audit[4178]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe692eb980 a2=94 a3=54428f items=0 ppid=3990 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.232000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:18:33.232000 audit: BPF prog-id=197 op=LOAD Jan 26 18:18:33.232000 audit[4178]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe692eb9b0 a2=94 a3=2 items=0 ppid=3990 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.232000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:18:33.232000 audit: BPF prog-id=197 op=UNLOAD Jan 26 18:18:33.232000 audit[4178]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe692eb9b0 a2=0 a3=2 items=0 ppid=3990 pid=4178 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.232000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:18:33.232000 audit: BPF prog-id=198 op=LOAD Jan 26 18:18:33.232000 audit[4178]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe692eb760 a2=94 a3=4 items=0 ppid=3990 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.232000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:18:33.232000 audit: BPF prog-id=198 op=UNLOAD Jan 26 18:18:33.232000 audit[4178]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe692eb760 a2=94 a3=4 items=0 ppid=3990 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.232000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:18:33.232000 audit: BPF prog-id=199 op=LOAD Jan 26 18:18:33.232000 audit[4178]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe692eb860 a2=94 a3=7ffe692eb9e0 items=0 ppid=3990 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.232000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:18:33.232000 audit: BPF prog-id=199 op=UNLOAD Jan 26 18:18:33.232000 audit[4178]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe692eb860 a2=0 a3=7ffe692eb9e0 items=0 ppid=3990 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.232000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:18:33.234000 audit: BPF prog-id=200 op=LOAD Jan 26 18:18:33.234000 audit[4178]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe692eaf90 a2=94 a3=2 items=0 ppid=3990 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.234000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:18:33.234000 audit: BPF prog-id=200 op=UNLOAD Jan 26 18:18:33.234000 audit[4178]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe692eaf90 a2=0 a3=2 items=0 ppid=3990 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.234000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:18:33.234000 audit: BPF prog-id=201 op=LOAD Jan 26 18:18:33.234000 audit[4178]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe692eb090 a2=94 a3=30 items=0 ppid=3990 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.234000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 26 18:18:33.237944 kubelet[2826]: I0126 18:18:33.237799 2826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b97323b3-2319-4d70-9a63-c4ff89b68e41" path="/var/lib/kubelet/pods/b97323b3-2319-4d70-9a63-c4ff89b68e41/volumes" Jan 26 18:18:33.246000 audit: BPF prog-id=202 op=LOAD Jan 26 18:18:33.246000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc00ec68b0 a2=98 a3=0 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.246000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.246000 audit: BPF prog-id=202 op=UNLOAD Jan 26 18:18:33.246000 audit[4187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc00ec6880 a3=0 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.246000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.246000 audit: BPF prog-id=203 op=LOAD Jan 26 18:18:33.246000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc00ec66a0 a2=94 a3=54428f items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.246000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.246000 audit: BPF prog-id=203 op=UNLOAD Jan 26 18:18:33.246000 audit[4187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc00ec66a0 a2=94 a3=54428f items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.246000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.246000 audit: BPF prog-id=204 op=LOAD Jan 26 18:18:33.246000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc00ec66d0 a2=94 a3=2 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.246000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.246000 audit: BPF prog-id=204 op=UNLOAD Jan 26 18:18:33.246000 audit[4187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc00ec66d0 a2=0 a3=2 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.246000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.417000 audit: BPF prog-id=205 op=LOAD Jan 26 18:18:33.417000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc00ec6590 a2=94 a3=1 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.417000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.417000 audit: BPF prog-id=205 op=UNLOAD Jan 26 18:18:33.417000 audit[4187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc00ec6590 a2=94 a3=1 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.417000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.427000 audit: BPF prog-id=206 op=LOAD Jan 26 18:18:33.427000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc00ec6580 a2=94 a3=4 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.427000 audit: BPF prog-id=206 op=UNLOAD Jan 26 18:18:33.427000 audit[4187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc00ec6580 a2=0 a3=4 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.427000 audit: BPF prog-id=207 op=LOAD Jan 26 18:18:33.427000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc00ec63e0 a2=94 a3=5 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.427000 audit: BPF prog-id=207 op=UNLOAD Jan 26 18:18:33.427000 audit[4187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc00ec63e0 a2=0 a3=5 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.427000 audit: BPF prog-id=208 op=LOAD Jan 26 18:18:33.427000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc00ec6600 a2=94 a3=6 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.427000 audit: BPF prog-id=208 op=UNLOAD Jan 26 18:18:33.427000 audit[4187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc00ec6600 a2=0 a3=6 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.428000 audit: BPF prog-id=209 op=LOAD Jan 26 18:18:33.428000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc00ec5db0 
a2=94 a3=88 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.428000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.428000 audit: BPF prog-id=210 op=LOAD Jan 26 18:18:33.428000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc00ec5c30 a2=94 a3=2 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.428000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.428000 audit: BPF prog-id=210 op=UNLOAD Jan 26 18:18:33.428000 audit[4187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc00ec5c60 a2=0 a3=7ffc00ec5d60 items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.428000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.428000 audit: BPF prog-id=209 op=UNLOAD Jan 26 18:18:33.428000 audit[4187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=af63d10 a2=0 a3=fcb64fd1c18b5c8d items=0 ppid=3990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.428000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 26 18:18:33.443000 audit: BPF prog-id=201 op=UNLOAD Jan 26 18:18:33.443000 audit[3990]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0009d8100 a2=0 a3=0 items=0 ppid=3972 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.443000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 26 18:18:33.508000 audit[4211]: NETFILTER_CFG table=mangle:119 family=2 entries=16 op=nft_register_chain pid=4211 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:18:33.508000 audit[4211]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff894bbda0 a2=0 a3=7fff894bbd8c items=0 ppid=3990 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.508000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 
18:18:33.519000 audit[4216]: NETFILTER_CFG table=nat:120 family=2 entries=15 op=nft_register_chain pid=4216 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:18:33.519000 audit[4216]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffcaaa9b1d0 a2=0 a3=7ffcaaa9b1bc items=0 ppid=3990 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.519000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:18:33.523000 audit[4212]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4212 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:18:33.523000 audit[4212]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc98aa6580 a2=0 a3=7ffc98aa656c items=0 ppid=3990 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.523000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:18:33.523000 audit[4213]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4213 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:18:33.523000 audit[4213]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffffe5b1080 a2=0 a3=7ffffe5b106c items=0 ppid=3990 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:33.523000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:18:33.979153 kubelet[2826]: E0126 18:18:33.979093 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bf8465869-rt8sq" podUID="7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8" Jan 26 18:18:34.006000 audit[4224]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:34.006000 audit[4224]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff5c13d2d0 a2=0 a3=7fff5c13d2bc items=0 ppid=2938 pid=4224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:34.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:34.019000 audit[4224]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:34.019000 audit[4224]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff5c13d2d0 a2=0 a3=0 items=0 ppid=2938 pid=4224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:34.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:34.408129 systemd-networkd[1518]: calia18df8605e0: Gained IPv6LL Jan 26 18:18:34.920221 systemd-networkd[1518]: vxlan.calico: Gained IPv6LL Jan 26 18:18:34.982384 kubelet[2826]: E0126 18:18:34.982345 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bf8465869-rt8sq" podUID="7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8" Jan 26 18:18:35.570927 systemd[1]: Started sshd@9-10.0.0.64:22-10.0.0.1:40956.service - OpenSSH per-connection server daemon (10.0.0.1:40956). Jan 26 18:18:35.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.64:22-10.0.0.1:40956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:18:35.677000 audit[4233]: USER_ACCT pid=4233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:35.679987 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 40956 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:18:35.679000 audit[4233]: CRED_ACQ pid=4233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:35.679000 audit[4233]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7d3825b0 a2=3 a3=0 items=0 ppid=1 pid=4233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:35.679000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:18:35.682460 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:18:35.688702 systemd-logind[1579]: New session 11 of user core. Jan 26 18:18:35.699019 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 26 18:18:35.700000 audit[4233]: USER_START pid=4233 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:35.702000 audit[4237]: CRED_ACQ pid=4237 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:35.811039 sshd[4237]: Connection closed by 10.0.0.1 port 40956 Jan 26 18:18:35.811362 sshd-session[4233]: pam_unix(sshd:session): session closed for user core Jan 26 18:18:35.810000 audit[4233]: USER_END pid=4233 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:35.814707 kernel: kauditd_printk_skb: 239 callbacks suppressed Jan 26 18:18:35.814773 kernel: audit: type=1106 audit(1769451515.810:655): pid=4233 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:35.815457 systemd[1]: sshd@9-10.0.0.64:22-10.0.0.1:40956.service: Deactivated successfully. Jan 26 18:18:35.817489 systemd[1]: session-11.scope: Deactivated successfully. Jan 26 18:18:35.818784 systemd-logind[1579]: Session 11 logged out. Waiting for processes to exit. Jan 26 18:18:35.820194 systemd-logind[1579]: Removed session 11. 
Jan 26 18:18:35.811000 audit[4233]: CRED_DISP pid=4233 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:35.835263 kernel: audit: type=1104 audit(1769451515.811:656): pid=4233 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:35.835317 kernel: audit: type=1131 audit(1769451515.814:657): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.64:22-10.0.0.1:40956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:35.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.64:22-10.0.0.1:40956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:36.230735 kubelet[2826]: E0126 18:18:36.230604 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:36.231206 containerd[1597]: time="2026-01-26T18:18:36.231044464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xxkhp,Uid:0f7dc91b-7e8b-44a4-acf9-cafa269e417b,Namespace:kube-system,Attempt:0,}" Jan 26 18:18:36.343393 systemd-networkd[1518]: calid2d6368492e: Link UP Jan 26 18:18:36.344442 systemd-networkd[1518]: calid2d6368492e: Gained carrier Jan 26 18:18:36.358038 containerd[1597]: 2026-01-26 18:18:36.274 [INFO][4251] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--xxkhp-eth0 coredns-668d6bf9bc- kube-system 0f7dc91b-7e8b-44a4-acf9-cafa269e417b 825 0 2026-01-26 18:17:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-xxkhp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid2d6368492e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxkhp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxkhp-" Jan 26 18:18:36.358038 containerd[1597]: 2026-01-26 18:18:36.274 [INFO][4251] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxkhp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxkhp-eth0" Jan 26 18:18:36.358038 containerd[1597]: 2026-01-26 18:18:36.304 [INFO][4266] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" HandleID="k8s-pod-network.26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" Workload="localhost-k8s-coredns--668d6bf9bc--xxkhp-eth0" Jan 26 18:18:36.358295 containerd[1597]: 2026-01-26 18:18:36.304 [INFO][4266] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" 
HandleID="k8s-pod-network.26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" Workload="localhost-k8s-coredns--668d6bf9bc--xxkhp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001355e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-xxkhp", "timestamp":"2026-01-26 18:18:36.304414289 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:18:36.358295 containerd[1597]: 2026-01-26 18:18:36.304 [INFO][4266] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 26 18:18:36.358295 containerd[1597]: 2026-01-26 18:18:36.304 [INFO][4266] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 26 18:18:36.358295 containerd[1597]: 2026-01-26 18:18:36.304 [INFO][4266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:18:36.358295 containerd[1597]: 2026-01-26 18:18:36.311 [INFO][4266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" host="localhost" Jan 26 18:18:36.358295 containerd[1597]: 2026-01-26 18:18:36.316 [INFO][4266] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:18:36.358295 containerd[1597]: 2026-01-26 18:18:36.322 [INFO][4266] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:18:36.358295 containerd[1597]: 2026-01-26 18:18:36.324 [INFO][4266] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:36.358295 containerd[1597]: 2026-01-26 18:18:36.326 [INFO][4266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:36.358295 containerd[1597]: 2026-01-26 18:18:36.326 [INFO][4266] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" host="localhost" Jan 26 18:18:36.358531 containerd[1597]: 2026-01-26 18:18:36.328 [INFO][4266] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88 Jan 26 18:18:36.358531 containerd[1597]: 2026-01-26 18:18:36.333 [INFO][4266] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" host="localhost" Jan 26 18:18:36.358531 containerd[1597]: 2026-01-26 18:18:36.337 [INFO][4266] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" host="localhost" Jan 26 18:18:36.358531 containerd[1597]: 2026-01-26 18:18:36.337 [INFO][4266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" host="localhost" Jan 26 18:18:36.358531 containerd[1597]: 2026-01-26 18:18:36.338 [INFO][4266] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 26 18:18:36.358531 containerd[1597]: 2026-01-26 18:18:36.338 [INFO][4266] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" HandleID="k8s-pod-network.26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" Workload="localhost-k8s-coredns--668d6bf9bc--xxkhp-eth0" Jan 26 18:18:36.358631 containerd[1597]: 2026-01-26 18:18:36.340 [INFO][4251] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxkhp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxkhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xxkhp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0f7dc91b-7e8b-44a4-acf9-cafa269e417b", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 17, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-xxkhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2d6368492e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:36.358708 containerd[1597]: 2026-01-26 18:18:36.340 [INFO][4251] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxkhp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxkhp-eth0" Jan 26 18:18:36.358708 containerd[1597]: 2026-01-26 18:18:36.340 [INFO][4251] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2d6368492e ContainerID="26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxkhp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxkhp-eth0" Jan 26 18:18:36.358708 containerd[1597]: 2026-01-26 18:18:36.344 [INFO][4251] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxkhp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxkhp-eth0" Jan 26 18:18:36.358772 
containerd[1597]: 2026-01-26 18:18:36.344 [INFO][4251] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxkhp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxkhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xxkhp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0f7dc91b-7e8b-44a4-acf9-cafa269e417b", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 17, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88", Pod:"coredns-668d6bf9bc-xxkhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2d6368492e", MAC:"ba:83:e8:23:b4:cb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:36.358772 containerd[1597]: 2026-01-26 18:18:36.352 [INFO][4251] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxkhp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxkhp-eth0" Jan 26 18:18:36.367000 audit[4283]: NETFILTER_CFG table=filter:125 family=2 entries=42 op=nft_register_chain pid=4283 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:18:36.367000 audit[4283]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7fff5942aa50 a2=0 a3=7fff5942aa3c items=0 ppid=3990 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.385805 kernel: audit: type=1325 audit(1769451516.367:658): table=filter:125 family=2 entries=42 op=nft_register_chain pid=4283 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:18:36.385916 kernel: audit: type=1300 audit(1769451516.367:658): arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7fff5942aa50 a2=0 a3=7fff5942aa3c items=0 ppid=3990 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.367000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:18:36.392324 containerd[1597]: time="2026-01-26T18:18:36.386697575Z" level=info msg="connecting to shim 26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88" address="unix:///run/containerd/s/de6e38ada48ff80ce442cb401358c4497834b7b97f66fdff87dd948205f1aa96" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:18:36.392957 kernel: audit: type=1327 audit(1769451516.367:658): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:18:36.441051 systemd[1]: Started cri-containerd-26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88.scope - libcontainer container 26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88. Jan 26 18:18:36.452000 audit: BPF prog-id=211 op=LOAD Jan 26 18:18:36.456905 kernel: audit: type=1334 audit(1769451516.452:659): prog-id=211 op=LOAD Jan 26 18:18:36.456958 kernel: audit: type=1334 audit(1769451516.453:660): prog-id=212 op=LOAD Jan 26 18:18:36.453000 audit: BPF prog-id=212 op=LOAD Jan 26 18:18:36.457074 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:18:36.453000 audit[4302]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4291 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.468092 kernel: audit: type=1300 audit(1769451516.453:660): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4291 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.476987 kernel: audit: type=1327 audit(1769451516.453:660): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623162393131663038643836353264656661356638656562356435 Jan 26 18:18:36.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623162393131663038643836353264656661356638656562356435 Jan 26 18:18:36.453000 audit: BPF prog-id=212 op=UNLOAD Jan 26 18:18:36.453000 audit[4302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4291 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623162393131663038643836353264656661356638656562356435 Jan 26 18:18:36.453000 audit: BPF 
prog-id=213 op=LOAD Jan 26 18:18:36.453000 audit[4302]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4291 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623162393131663038643836353264656661356638656562356435 Jan 26 18:18:36.453000 audit: BPF prog-id=214 op=LOAD Jan 26 18:18:36.453000 audit[4302]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4291 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623162393131663038643836353264656661356638656562356435 Jan 26 18:18:36.453000 audit: BPF prog-id=214 op=UNLOAD Jan 26 18:18:36.453000 audit[4302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4291 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623162393131663038643836353264656661356638656562356435 Jan 26 18:18:36.453000 audit: BPF prog-id=213 op=UNLOAD Jan 26 18:18:36.453000 audit[4302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4291 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623162393131663038643836353264656661356638656562356435 Jan 26 18:18:36.453000 audit: BPF prog-id=215 op=LOAD Jan 26 18:18:36.453000 audit[4302]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4291 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623162393131663038643836353264656661356638656562356435 Jan 26 18:18:36.501902 containerd[1597]: time="2026-01-26T18:18:36.501762985Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-xxkhp,Uid:0f7dc91b-7e8b-44a4-acf9-cafa269e417b,Namespace:kube-system,Attempt:0,} returns sandbox id \"26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88\"" Jan 26 18:18:36.502731 kubelet[2826]: E0126 18:18:36.502644 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:36.504421 containerd[1597]: time="2026-01-26T18:18:36.504370226Z" level=info msg="CreateContainer within sandbox \"26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 26 18:18:36.517480 containerd[1597]: time="2026-01-26T18:18:36.517394382Z" level=info msg="Container a16663bce4b98fc49390ae9af7d64ff817c897b0014654a3d845c95e8481d0ae: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:18:36.518967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1570473748.mount: Deactivated successfully. Jan 26 18:18:36.527041 containerd[1597]: time="2026-01-26T18:18:36.526965209Z" level=info msg="CreateContainer within sandbox \"26b1b911f08d8652defa5f8eeb5d55b0b34174db193f8b8d77753db1ae525f88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a16663bce4b98fc49390ae9af7d64ff817c897b0014654a3d845c95e8481d0ae\"" Jan 26 18:18:36.527675 containerd[1597]: time="2026-01-26T18:18:36.527592020Z" level=info msg="StartContainer for \"a16663bce4b98fc49390ae9af7d64ff817c897b0014654a3d845c95e8481d0ae\"" Jan 26 18:18:36.528523 containerd[1597]: time="2026-01-26T18:18:36.528473026Z" level=info msg="connecting to shim a16663bce4b98fc49390ae9af7d64ff817c897b0014654a3d845c95e8481d0ae" address="unix:///run/containerd/s/de6e38ada48ff80ce442cb401358c4497834b7b97f66fdff87dd948205f1aa96" protocol=ttrpc version=3 Jan 26 18:18:36.553040 systemd[1]: Started cri-containerd-a16663bce4b98fc49390ae9af7d64ff817c897b0014654a3d845c95e8481d0ae.scope - libcontainer container a16663bce4b98fc49390ae9af7d64ff817c897b0014654a3d845c95e8481d0ae. 
Jan 26 18:18:36.569000 audit: BPF prog-id=216 op=LOAD Jan 26 18:18:36.570000 audit: BPF prog-id=217 op=LOAD Jan 26 18:18:36.570000 audit[4328]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4291 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363636336263653462393866633439333930616539616637643634 Jan 26 18:18:36.570000 audit: BPF prog-id=217 op=UNLOAD Jan 26 18:18:36.570000 audit[4328]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4291 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363636336263653462393866633439333930616539616637643634 Jan 26 18:18:36.570000 audit: BPF prog-id=218 op=LOAD Jan 26 18:18:36.570000 audit[4328]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4291 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363636336263653462393866633439333930616539616637643634 Jan 26 18:18:36.570000 audit: BPF prog-id=219 op=LOAD Jan 26 18:18:36.570000 audit[4328]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4291 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363636336263653462393866633439333930616539616637643634 Jan 26 18:18:36.570000 audit: BPF prog-id=219 op=UNLOAD Jan 26 18:18:36.570000 audit[4328]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4291 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363636336263653462393866633439333930616539616637643634 Jan 26 18:18:36.570000 audit: BPF prog-id=218 op=UNLOAD Jan 26 18:18:36.570000 audit[4328]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4291 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363636336263653462393866633439333930616539616637643634 Jan 26 18:18:36.570000 audit: BPF prog-id=220 op=LOAD Jan 26 18:18:36.570000 audit[4328]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4291 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:36.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363636336263653462393866633439333930616539616637643634 Jan 26 18:18:36.593746 containerd[1597]: time="2026-01-26T18:18:36.593683409Z" level=info msg="StartContainer for \"a16663bce4b98fc49390ae9af7d64ff817c897b0014654a3d845c95e8481d0ae\" returns successfully" Jan 26 18:18:36.991254 kubelet[2826]: E0126 18:18:36.991228 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:37.002565 kubelet[2826]: I0126 18:18:37.002402 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xxkhp" podStartSLOduration=38.002384538 podStartE2EDuration="38.002384538s" podCreationTimestamp="2026-01-26 18:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:18:37.002083624 +0000 UTC m=+43.932944327" watchObservedRunningTime="2026-01-26 18:18:37.002384538 +0000 UTC m=+43.933245250" Jan 26 18:18:37.016000 audit[4362]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=4362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:37.016000 audit[4362]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffda4b9b040 a2=0 a3=7ffda4b9b02c items=0 ppid=2938 pid=4362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.016000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:37.022000 audit[4362]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=4362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:37.022000 audit[4362]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffda4b9b040 a2=0 a3=0 items=0 ppid=2938 pid=4362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.022000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:37.046000 audit[4364]: NETFILTER_CFG table=filter:128 family=2 entries=17 op=nft_register_rule pid=4364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:37.046000 audit[4364]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd691a37d0 a2=0 a3=7ffd691a37bc items=0 ppid=2938 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.046000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:37.053000 audit[4364]: NETFILTER_CFG table=nat:129 family=2 entries=35 op=nft_register_chain pid=4364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:37.053000 audit[4364]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd691a37d0 a2=0 a3=7ffd691a37bc items=0 ppid=2938 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.053000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:37.230582 containerd[1597]: time="2026-01-26T18:18:37.230530143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hc8sx,Uid:e2e25c6b-345b-4146-b644-2efa5b4232f3,Namespace:calico-system,Attempt:0,}" Jan 26 18:18:37.230752 containerd[1597]: time="2026-01-26T18:18:37.230663200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6df84d7cd-llmh4,Uid:abefa45c-cb84-41f4-b47b-49099e1244be,Namespace:calico-system,Attempt:0,}" Jan 26 18:18:37.358947 systemd-networkd[1518]: cali6f4fc4bb144: Link UP Jan 26 18:18:37.359232 systemd-networkd[1518]: cali6f4fc4bb144: Gained carrier Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.282 [INFO][4365] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6df84d7cd--llmh4-eth0 calico-kube-controllers-6df84d7cd- calico-system abefa45c-cb84-41f4-b47b-49099e1244be 826 0 2026-01-26 18:18:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6df84d7cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6df84d7cd-llmh4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6f4fc4bb144 [] [] }} ContainerID="7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" Namespace="calico-system" Pod="calico-kube-controllers-6df84d7cd-llmh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df84d7cd--llmh4-" Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.282 [INFO][4365] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" Namespace="calico-system" Pod="calico-kube-controllers-6df84d7cd-llmh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df84d7cd--llmh4-eth0" Jan 
26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.312 [INFO][4395] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" HandleID="k8s-pod-network.7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" Workload="localhost-k8s-calico--kube--controllers--6df84d7cd--llmh4-eth0" Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.312 [INFO][4395] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" HandleID="k8s-pod-network.7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" Workload="localhost-k8s-calico--kube--controllers--6df84d7cd--llmh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002871f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6df84d7cd-llmh4", "timestamp":"2026-01-26 18:18:37.312706516 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.313 [INFO][4395] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.313 [INFO][4395] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.313 [INFO][4395] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.321 [INFO][4395] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" host="localhost" Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.326 [INFO][4395] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.332 [INFO][4395] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.333 [INFO][4395] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.336 [INFO][4395] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.336 [INFO][4395] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" host="localhost" Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.338 [INFO][4395] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0 Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.342 [INFO][4395] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" host="localhost" Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.347 [INFO][4395] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" 
host="localhost" Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.347 [INFO][4395] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" host="localhost" Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.347 [INFO][4395] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 26 18:18:37.371588 containerd[1597]: 2026-01-26 18:18:37.347 [INFO][4395] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" HandleID="k8s-pod-network.7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" Workload="localhost-k8s-calico--kube--controllers--6df84d7cd--llmh4-eth0" Jan 26 18:18:37.372547 containerd[1597]: 2026-01-26 18:18:37.352 [INFO][4365] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" Namespace="calico-system" Pod="calico-kube-controllers-6df84d7cd-llmh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df84d7cd--llmh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6df84d7cd--llmh4-eth0", GenerateName:"calico-kube-controllers-6df84d7cd-", Namespace:"calico-system", SelfLink:"", UID:"abefa45c-cb84-41f4-b47b-49099e1244be", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 18, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6df84d7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6df84d7cd-llmh4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f4fc4bb144", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:37.372547 containerd[1597]: 2026-01-26 18:18:37.352 [INFO][4365] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" Namespace="calico-system" Pod="calico-kube-controllers-6df84d7cd-llmh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df84d7cd--llmh4-eth0" Jan 26 18:18:37.372547 containerd[1597]: 2026-01-26 18:18:37.352 [INFO][4365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f4fc4bb144 ContainerID="7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" Namespace="calico-system" Pod="calico-kube-controllers-6df84d7cd-llmh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df84d7cd--llmh4-eth0" Jan 26 18:18:37.372547 containerd[1597]: 2026-01-26 18:18:37.357 [INFO][4365] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" Namespace="calico-system" Pod="calico-kube-controllers-6df84d7cd-llmh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df84d7cd--llmh4-eth0" Jan 26 18:18:37.372547 containerd[1597]: 2026-01-26 18:18:37.358 [INFO][4365] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" Namespace="calico-system" Pod="calico-kube-controllers-6df84d7cd-llmh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df84d7cd--llmh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6df84d7cd--llmh4-eth0", GenerateName:"calico-kube-controllers-6df84d7cd-", Namespace:"calico-system", SelfLink:"", UID:"abefa45c-cb84-41f4-b47b-49099e1244be", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 18, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6df84d7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0", Pod:"calico-kube-controllers-6df84d7cd-llmh4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f4fc4bb144", MAC:"c2:d5:c6:b5:9d:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:37.372547 containerd[1597]: 2026-01-26 18:18:37.368 [INFO][4365] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" Namespace="calico-system" Pod="calico-kube-controllers-6df84d7cd-llmh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df84d7cd--llmh4-eth0" Jan 26 18:18:37.385000 audit[4419]: NETFILTER_CFG table=filter:130 family=2 entries=40 op=nft_register_chain pid=4419 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:18:37.385000 audit[4419]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffcacfc5fb0 a2=0 a3=7ffcacfc5f9c items=0 ppid=3990 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.385000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:18:37.401407 containerd[1597]: time="2026-01-26T18:18:37.401348337Z" level=info msg="connecting to shim 
7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0" address="unix:///run/containerd/s/5799b72c327d65cef9a64f61667294abf1cb9a7e4f8b791ae932a85892786470" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:18:37.434042 systemd[1]: Started cri-containerd-7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0.scope - libcontainer container 7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0. Jan 26 18:18:37.452000 audit: BPF prog-id=221 op=LOAD Jan 26 18:18:37.453000 audit: BPF prog-id=222 op=LOAD Jan 26 18:18:37.453000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4427 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738303063333831393032306630386639366430383432383664626238 Jan 26 18:18:37.454000 audit: BPF prog-id=222 op=UNLOAD Jan 26 18:18:37.454000 audit[4440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4427 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738303063333831393032306630386639366430383432383664626238 Jan 26 18:18:37.454000 audit: BPF prog-id=223 op=LOAD Jan 26 18:18:37.454000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4427 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738303063333831393032306630386639366430383432383664626238 Jan 26 18:18:37.454000 audit: BPF prog-id=224 op=LOAD Jan 26 18:18:37.454000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4427 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738303063333831393032306630386639366430383432383664626238 Jan 26 18:18:37.454000 audit: BPF prog-id=224 op=UNLOAD Jan 26 18:18:37.454000 audit[4440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4427 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738303063333831393032306630386639366430383432383664626238 Jan 26 18:18:37.454000 audit: BPF prog-id=223 op=UNLOAD Jan 26 18:18:37.454000 audit[4440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4427 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738303063333831393032306630386639366430383432383664626238 Jan 26 18:18:37.454000 audit: BPF prog-id=225 op=LOAD Jan 26 18:18:37.454000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4427 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738303063333831393032306630386639366430383432383664626238 Jan 26 18:18:37.457494 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:18:37.464087 systemd-networkd[1518]: calic11b43ba5de: Link UP Jan 26 18:18:37.465080 systemd-networkd[1518]: calic11b43ba5de: Gained carrier Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.284 [INFO][4368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--hc8sx-eth0 goldmane-666569f655- calico-system e2e25c6b-345b-4146-b644-2efa5b4232f3 827 0 2026-01-26 18:18:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-hc8sx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic11b43ba5de [] [] }} ContainerID="a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" Namespace="calico-system" Pod="goldmane-666569f655-hc8sx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hc8sx-" Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.285 [INFO][4368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" Namespace="calico-system" Pod="goldmane-666569f655-hc8sx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hc8sx-eth0" Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.325 [INFO][4397] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" HandleID="k8s-pod-network.a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" 
Workload="localhost-k8s-goldmane--666569f655--hc8sx-eth0" Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.325 [INFO][4397] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" HandleID="k8s-pod-network.a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" Workload="localhost-k8s-goldmane--666569f655--hc8sx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000442c70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-hc8sx", "timestamp":"2026-01-26 18:18:37.32511795 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.325 [INFO][4397] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.347 [INFO][4397] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.347 [INFO][4397] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.423 [INFO][4397] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" host="localhost" Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.431 [INFO][4397] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.436 [INFO][4397] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.439 [INFO][4397] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.442 [INFO][4397] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.442 [INFO][4397] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" host="localhost" Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.443 [INFO][4397] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.448 [INFO][4397] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" host="localhost" Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.455 [INFO][4397] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" host="localhost" Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.455 [INFO][4397] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" host="localhost" Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.455 [INFO][4397] ipam/ipam_plugin.go 
398: Released host-wide IPAM lock. Jan 26 18:18:37.480251 containerd[1597]: 2026-01-26 18:18:37.455 [INFO][4397] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" HandleID="k8s-pod-network.a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" Workload="localhost-k8s-goldmane--666569f655--hc8sx-eth0" Jan 26 18:18:37.480765 containerd[1597]: 2026-01-26 18:18:37.459 [INFO][4368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" Namespace="calico-system" Pod="goldmane-666569f655-hc8sx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hc8sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--hc8sx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e2e25c6b-345b-4146-b644-2efa5b4232f3", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-hc8sx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic11b43ba5de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:37.480765 containerd[1597]: 2026-01-26 18:18:37.459 [INFO][4368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" Namespace="calico-system" Pod="goldmane-666569f655-hc8sx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hc8sx-eth0" Jan 26 18:18:37.480765 containerd[1597]: 2026-01-26 18:18:37.459 [INFO][4368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic11b43ba5de ContainerID="a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" Namespace="calico-system" Pod="goldmane-666569f655-hc8sx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hc8sx-eth0" Jan 26 18:18:37.480765 containerd[1597]: 2026-01-26 18:18:37.465 [INFO][4368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" Namespace="calico-system" Pod="goldmane-666569f655-hc8sx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hc8sx-eth0" Jan 26 18:18:37.480765 containerd[1597]: 2026-01-26 18:18:37.466 [INFO][4368] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" Namespace="calico-system" Pod="goldmane-666569f655-hc8sx" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hc8sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--hc8sx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e2e25c6b-345b-4146-b644-2efa5b4232f3", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f", Pod:"goldmane-666569f655-hc8sx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic11b43ba5de", MAC:"0e:e4:51:2a:43:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:37.480765 containerd[1597]: 2026-01-26 18:18:37.477 [INFO][4368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" Namespace="calico-system" Pod="goldmane-666569f655-hc8sx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hc8sx-eth0" Jan 26 18:18:37.496000 audit[4469]: NETFILTER_CFG table=filter:131 family=2 entries=52 op=nft_register_chain pid=4469 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:18:37.496000 audit[4469]: SYSCALL arch=c000003e syscall=46 success=yes exit=27556 a0=3 a1=7fff950c7060 a2=0 a3=7fff950c704c items=0 ppid=3990 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.496000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:18:37.512881 containerd[1597]: time="2026-01-26T18:18:37.512507368Z" level=info msg="connecting to shim a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f" address="unix:///run/containerd/s/781804812a784c9556505ea099ed8276718d3043311da983f9d53bedd4d05160" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:18:37.528086 containerd[1597]: time="2026-01-26T18:18:37.527923150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6df84d7cd-llmh4,Uid:abefa45c-cb84-41f4-b47b-49099e1244be,Namespace:calico-system,Attempt:0,} returns sandbox id \"7800c3819020f08f96d084286dbb81e76ea2059c5d7ee71e23e5c0b99f35bce0\"" Jan 26 18:18:37.531626 containerd[1597]: time="2026-01-26T18:18:37.531549944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 26 18:18:37.552045 systemd[1]: Started 
cri-containerd-a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f.scope - libcontainer container a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f. Jan 26 18:18:37.566000 audit: BPF prog-id=226 op=LOAD Jan 26 18:18:37.567000 audit: BPF prog-id=227 op=LOAD Jan 26 18:18:37.567000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4478 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135326237326334333036356230613863363764643761633563373430 Jan 26 18:18:37.567000 audit: BPF prog-id=227 op=UNLOAD Jan 26 18:18:37.567000 audit[4495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135326237326334333036356230613863363764643761633563373430 Jan 26 18:18:37.568000 audit: BPF prog-id=228 op=LOAD Jan 26 18:18:37.568000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4478 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135326237326334333036356230613863363764643761633563373430 Jan 26 18:18:37.568000 audit: BPF prog-id=229 op=LOAD Jan 26 18:18:37.568000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4478 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135326237326334333036356230613863363764643761633563373430 Jan 26 18:18:37.568000 audit: BPF prog-id=229 op=UNLOAD Jan 26 18:18:37.568000 audit[4495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.568000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135326237326334333036356230613863363764643761633563373430 Jan 26 18:18:37.568000 audit: BPF prog-id=228 op=UNLOAD Jan 26 18:18:37.568000 audit[4495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4478 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135326237326334333036356230613863363764643761633563373430 Jan 26 18:18:37.568000 audit: BPF prog-id=230 op=LOAD Jan 26 18:18:37.568000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4478 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:37.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135326237326334333036356230613863363764643761633563373430 Jan 26 18:18:37.572051 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:18:37.610029 containerd[1597]: time="2026-01-26T18:18:37.609988352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hc8sx,Uid:e2e25c6b-345b-4146-b644-2efa5b4232f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"a52b72c43065b0a8c67dd7ac5c74069f0ac5b12aed621438c7e212276965ea0f\"" Jan 26 18:18:37.626056 containerd[1597]: time="2026-01-26T18:18:37.625896090Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:37.627344 containerd[1597]: time="2026-01-26T18:18:37.627240304Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 26 18:18:37.627344 containerd[1597]: time="2026-01-26T18:18:37.627325672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:37.627787 kubelet[2826]: E0126 18:18:37.627735 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:18:37.627787 kubelet[2826]: E0126 18:18:37.627784 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:18:37.628550 kubelet[2826]: E0126 18:18:37.628354 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5cx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6df84d7cd-llmh4_calico-system(abefa45c-cb84-41f4-b47b-49099e1244be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:37.629188 containerd[1597]: time="2026-01-26T18:18:37.628938629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 26 18:18:37.630617 kubelet[2826]: E0126 18:18:37.630503 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-6df84d7cd-llmh4" podUID="abefa45c-cb84-41f4-b47b-49099e1244be" Jan 26 18:18:37.703567 containerd[1597]: time="2026-01-26T18:18:37.703499953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:37.704875 containerd[1597]: time="2026-01-26T18:18:37.704775907Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 26 18:18:37.705029 containerd[1597]: time="2026-01-26T18:18:37.704809155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:37.705061 kubelet[2826]: E0126 18:18:37.705040 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:18:37.705103 kubelet[2826]: E0126 18:18:37.705070 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:18:37.705256 kubelet[2826]: E0126 18:18:37.705165 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2kbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hc8sx_calico-system(e2e25c6b-345b-4146-b644-2efa5b4232f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:37.706465 kubelet[2826]: E0126 18:18:37.706392 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hc8sx" podUID="e2e25c6b-345b-4146-b644-2efa5b4232f3" Jan 26 18:18:37.996545 kubelet[2826]: E0126 18:18:37.996471 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hc8sx" podUID="e2e25c6b-345b-4146-b644-2efa5b4232f3" Jan 26 18:18:38.009277 kubelet[2826]: E0126 18:18:38.009227 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:38.010728 kubelet[2826]: E0126 18:18:38.010567 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df84d7cd-llmh4" podUID="abefa45c-cb84-41f4-b47b-49099e1244be" Jan 26 18:18:38.029000 audit[4522]: NETFILTER_CFG table=filter:132 family=2 entries=14 op=nft_register_rule pid=4522 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:38.029000 audit[4522]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc74cd0a40 a2=0 a3=7ffc74cd0a2c items=0 ppid=2938 pid=4522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.029000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:38.036000 audit[4522]: NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=4522 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:38.036000 audit[4522]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc74cd0a40 a2=0 a3=7ffc74cd0a2c items=0 ppid=2938 pid=4522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.036000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:38.230887 kubelet[2826]: E0126 18:18:38.230770 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:38.232453 containerd[1597]: time="2026-01-26T18:18:38.232394131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6586m,Uid:85693ca0-4fb9-455c-bc62-47950c5b8df8,Namespace:kube-system,Attempt:0,}" Jan 26 18:18:38.232985 containerd[1597]: time="2026-01-26T18:18:38.232920188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc747cc9d-68rtx,Uid:6c6db6ff-501e-4b31-91b6-e1eb34423484,Namespace:calico-apiserver,Attempt:0,}" Jan 26 18:18:38.251551 systemd-networkd[1518]: calid2d6368492e: Gained IPv6LL Jan 26 18:18:38.394437 systemd-networkd[1518]: cali5375bf20155: Link UP Jan 26 18:18:38.396280 systemd-networkd[1518]: cali5375bf20155: Gained carrier Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.294 [INFO][4523] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--6586m-eth0 coredns-668d6bf9bc- kube-system 85693ca0-4fb9-455c-bc62-47950c5b8df8 821 0 2026-01-26 18:17:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-6586m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5375bf20155 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6586m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6586m-" Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.294 [INFO][4523] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6586m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6586m-eth0" Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.346 [INFO][4555] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" HandleID="k8s-pod-network.e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" Workload="localhost-k8s-coredns--668d6bf9bc--6586m-eth0" Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.347 [INFO][4555] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" HandleID="k8s-pod-network.e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" Workload="localhost-k8s-coredns--668d6bf9bc--6586m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000513470), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-6586m", "timestamp":"2026-01-26 18:18:38.346800471 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.347 [INFO][4555] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.347 [INFO][4555] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.347 [INFO][4555] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.354 [INFO][4555] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" host="localhost" Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.360 [INFO][4555] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.365 [INFO][4555] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.367 [INFO][4555] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.370 [INFO][4555] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.370 [INFO][4555] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" host="localhost" Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.371 [INFO][4555] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.377 [INFO][4555] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" host="localhost" Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.383 [INFO][4555] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" host="localhost" Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.383 [INFO][4555] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" host="localhost" Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.383 [INFO][4555] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 26 18:18:38.419467 containerd[1597]: 2026-01-26 18:18:38.383 [INFO][4555] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" HandleID="k8s-pod-network.e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" Workload="localhost-k8s-coredns--668d6bf9bc--6586m-eth0" Jan 26 18:18:38.420407 containerd[1597]: 2026-01-26 18:18:38.389 [INFO][4523] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6586m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6586m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6586m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"85693ca0-4fb9-455c-bc62-47950c5b8df8", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 17, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-6586m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5375bf20155", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:38.420407 containerd[1597]: 2026-01-26 18:18:38.389 [INFO][4523] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6586m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6586m-eth0" Jan 26 18:18:38.420407 containerd[1597]: 2026-01-26 18:18:38.389 [INFO][4523] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5375bf20155 ContainerID="e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6586m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6586m-eth0" Jan 26 18:18:38.420407 containerd[1597]: 2026-01-26 18:18:38.400 [INFO][4523] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6586m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6586m-eth0" Jan 26 18:18:38.420407 
containerd[1597]: 2026-01-26 18:18:38.403 [INFO][4523] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6586m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6586m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6586m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"85693ca0-4fb9-455c-bc62-47950c5b8df8", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 17, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d", Pod:"coredns-668d6bf9bc-6586m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5375bf20155", MAC:"92:fb:b6:bf:0b:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:38.420407 containerd[1597]: 2026-01-26 18:18:38.415 [INFO][4523] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6586m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6586m-eth0" Jan 26 18:18:38.435000 audit[4579]: NETFILTER_CFG table=filter:134 family=2 entries=44 op=nft_register_chain pid=4579 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:18:38.435000 audit[4579]: SYSCALL arch=c000003e syscall=46 success=yes exit=21532 a0=3 a1=7ffc8753a310 a2=0 a3=7ffc8753a2fc items=0 ppid=3990 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.435000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:18:38.447336 containerd[1597]: time="2026-01-26T18:18:38.447278632Z" level=info msg="connecting to shim e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d" address="unix:///run/containerd/s/d39c792e35a76bf0486d8dcb13c60b5adc4334b19f17f86347081c48bdf07c9e" namespace=k8s.io 
protocol=ttrpc version=3 Jan 26 18:18:38.495094 systemd[1]: Started cri-containerd-e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d.scope - libcontainer container e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d. Jan 26 18:18:38.497701 systemd-networkd[1518]: cali06ae1eaa792: Link UP Jan 26 18:18:38.499244 systemd-networkd[1518]: cali06ae1eaa792: Gained carrier Jan 26 18:18:38.504165 systemd-networkd[1518]: cali6f4fc4bb144: Gained IPv6LL Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.294 [INFO][4524] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6dc747cc9d--68rtx-eth0 calico-apiserver-6dc747cc9d- calico-apiserver 6c6db6ff-501e-4b31-91b6-e1eb34423484 829 0 2026-01-26 18:18:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dc747cc9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6dc747cc9d-68rtx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali06ae1eaa792 [] [] }} ContainerID="1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-68rtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--68rtx-" Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.295 [INFO][4524] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-68rtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--68rtx-eth0" Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.349 [INFO][4554] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" HandleID="k8s-pod-network.1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" Workload="localhost-k8s-calico--apiserver--6dc747cc9d--68rtx-eth0" Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.349 [INFO][4554] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" HandleID="k8s-pod-network.1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" Workload="localhost-k8s-calico--apiserver--6dc747cc9d--68rtx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050a9e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6dc747cc9d-68rtx", "timestamp":"2026-01-26 18:18:38.349223631 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.349 [INFO][4554] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.383 [INFO][4554] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.383 [INFO][4554] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.456 [INFO][4554] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" host="localhost" Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.464 [INFO][4554] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.470 [INFO][4554] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.472 [INFO][4554] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.474 [INFO][4554] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.474 [INFO][4554] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" host="localhost" Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.476 [INFO][4554] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794 Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.481 [INFO][4554] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" host="localhost" Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.488 [INFO][4554] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" host="localhost" Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.488 [INFO][4554] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" host="localhost" Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.488 [INFO][4554] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 26 18:18:38.520181 containerd[1597]: 2026-01-26 18:18:38.488 [INFO][4554] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" HandleID="k8s-pod-network.1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" Workload="localhost-k8s-calico--apiserver--6dc747cc9d--68rtx-eth0" Jan 26 18:18:38.521533 containerd[1597]: 2026-01-26 18:18:38.492 [INFO][4524] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-68rtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--68rtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc747cc9d--68rtx-eth0", GenerateName:"calico-apiserver-6dc747cc9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c6db6ff-501e-4b31-91b6-e1eb34423484", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc747cc9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6dc747cc9d-68rtx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06ae1eaa792", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:38.521533 containerd[1597]: 2026-01-26 18:18:38.492 [INFO][4524] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-68rtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--68rtx-eth0" Jan 26 18:18:38.521533 containerd[1597]: 2026-01-26 18:18:38.492 [INFO][4524] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06ae1eaa792 ContainerID="1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-68rtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--68rtx-eth0" Jan 26 18:18:38.521533 containerd[1597]: 2026-01-26 18:18:38.500 [INFO][4524] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-68rtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--68rtx-eth0" Jan 26 18:18:38.521533 containerd[1597]: 2026-01-26 18:18:38.502 [INFO][4524] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-68rtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--68rtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc747cc9d--68rtx-eth0", GenerateName:"calico-apiserver-6dc747cc9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c6db6ff-501e-4b31-91b6-e1eb34423484", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc747cc9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794", Pod:"calico-apiserver-6dc747cc9d-68rtx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06ae1eaa792", MAC:"76:6c:33:82:48:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:38.521533 containerd[1597]: 2026-01-26 18:18:38.514 [INFO][4524] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-68rtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--68rtx-eth0" Jan 26 18:18:38.523000 audit: BPF prog-id=231 op=LOAD Jan 26 18:18:38.523000 audit: BPF prog-id=232 op=LOAD Jan 26 18:18:38.523000 audit[4598]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4588 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539313134346564336264633430613465356364623763303631346232 Jan 26 18:18:38.523000 audit: BPF prog-id=232 op=UNLOAD Jan 26 18:18:38.523000 audit[4598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4588 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.523000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539313134346564336264633430613465356364623763303631346232 Jan 26 18:18:38.524000 audit: BPF prog-id=233 op=LOAD Jan 26 18:18:38.524000 audit[4598]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4588 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539313134346564336264633430613465356364623763303631346232 Jan 26 18:18:38.524000 audit: BPF prog-id=234 op=LOAD Jan 26 18:18:38.524000 audit[4598]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4588 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539313134346564336264633430613465356364623763303631346232 Jan 26 18:18:38.524000 audit: BPF prog-id=234 op=UNLOAD Jan 26 18:18:38.524000 audit[4598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4588 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539313134346564336264633430613465356364623763303631346232 Jan 26 18:18:38.524000 audit: BPF prog-id=233 op=UNLOAD Jan 26 18:18:38.524000 audit[4598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4588 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539313134346564336264633430613465356364623763303631346232 Jan 26 18:18:38.524000 audit: BPF prog-id=235 op=LOAD Jan 26 18:18:38.524000 audit[4598]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4588 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.524000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539313134346564336264633430613465356364623763303631346232 Jan 26 18:18:38.531114 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:18:38.568609 containerd[1597]: time="2026-01-26T18:18:38.568502769Z" level=info msg="connecting to shim 1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794" address="unix:///run/containerd/s/ea1b49a2f4f8b7b74894b1592d3d9b7fe40c6c5be08025b6bcc57684307096e0" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:18:38.581000 audit[4655]: NETFILTER_CFG table=filter:135 family=2 entries=66 op=nft_register_chain pid=4655 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:18:38.581000 audit[4655]: SYSCALL arch=c000003e syscall=46 success=yes exit=32960 a0=3 a1=7ffec598de40 a2=0 a3=7ffec598de2c items=0 ppid=3990 pid=4655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.581000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:18:38.595280 containerd[1597]: time="2026-01-26T18:18:38.595211401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6586m,Uid:85693ca0-4fb9-455c-bc62-47950c5b8df8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d\"" Jan 26 18:18:38.596161 kubelet[2826]: E0126 18:18:38.596083 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:38.598530 containerd[1597]: time="2026-01-26T18:18:38.598464465Z" level=info msg="CreateContainer within sandbox \"e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 26 18:18:38.610063 systemd[1]: Started cri-containerd-1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794.scope - libcontainer container 1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794. 
Jan 26 18:18:38.626644 containerd[1597]: time="2026-01-26T18:18:38.626551565Z" level=info msg="Container 66da26bfc79ead48dcf058fcc741ea5bd80f37069857476211498efcbdb473e4: CDI devices from CRI Config.CDIDevices: []" Jan 26 18:18:38.632000 audit: BPF prog-id=236 op=LOAD Jan 26 18:18:38.633000 audit: BPF prog-id=237 op=LOAD Jan 26 18:18:38.633000 audit[4654]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4639 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161363931636466333333333562656366313539353138643964353834 Jan 26 18:18:38.633000 audit: BPF prog-id=237 op=UNLOAD Jan 26 18:18:38.633000 audit[4654]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4639 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161363931636466333333333562656366313539353138643964353834 Jan 26 18:18:38.633000 audit: BPF prog-id=238 op=LOAD Jan 26 18:18:38.633000 audit[4654]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4639 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161363931636466333333333562656366313539353138643964353834 Jan 26 18:18:38.633000 audit: BPF prog-id=239 op=LOAD Jan 26 18:18:38.633000 audit[4654]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4639 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161363931636466333333333562656366313539353138643964353834 Jan 26 18:18:38.633000 audit: BPF prog-id=239 op=UNLOAD Jan 26 18:18:38.633000 audit[4654]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4639 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.633000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161363931636466333333333562656366313539353138643964353834 Jan 26 18:18:38.633000 audit: BPF prog-id=238 op=UNLOAD Jan 26 18:18:38.633000 audit[4654]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4639 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161363931636466333333333562656366313539353138643964353834 Jan 26 18:18:38.633000 audit: BPF prog-id=240 op=LOAD Jan 26 18:18:38.633000 audit[4654]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4639 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161363931636466333333333562656366313539353138643964353834 Jan 26 18:18:38.636880 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:18:38.637592 containerd[1597]: time="2026-01-26T18:18:38.637565787Z" level=info msg="CreateContainer within sandbox \"e91144ed3bdc40a4e5cdb7c0614b2c70620bb68f6f656d8f6f3fb926d7044b9d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66da26bfc79ead48dcf058fcc741ea5bd80f37069857476211498efcbdb473e4\"" Jan 26 18:18:38.639894 containerd[1597]: time="2026-01-26T18:18:38.639279520Z" level=info msg="StartContainer for \"66da26bfc79ead48dcf058fcc741ea5bd80f37069857476211498efcbdb473e4\"" Jan 26 18:18:38.643077 containerd[1597]: time="2026-01-26T18:18:38.642756325Z" level=info msg="connecting to shim 66da26bfc79ead48dcf058fcc741ea5bd80f37069857476211498efcbdb473e4" address="unix:///run/containerd/s/d39c792e35a76bf0486d8dcb13c60b5adc4334b19f17f86347081c48bdf07c9e" protocol=ttrpc version=3 Jan 26 18:18:38.673247 systemd[1]: Started cri-containerd-66da26bfc79ead48dcf058fcc741ea5bd80f37069857476211498efcbdb473e4.scope - libcontainer container 66da26bfc79ead48dcf058fcc741ea5bd80f37069857476211498efcbdb473e4. 
Jan 26 18:18:38.687442 containerd[1597]: time="2026-01-26T18:18:38.687368458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc747cc9d-68rtx,Uid:6c6db6ff-501e-4b31-91b6-e1eb34423484,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1a691cdf33335becf159518d9d584091db4a466f99b720fde7e393ca1485f794\"" Jan 26 18:18:38.689312 containerd[1597]: time="2026-01-26T18:18:38.689263898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:18:38.695000 audit: BPF prog-id=241 op=LOAD Jan 26 18:18:38.696000 audit: BPF prog-id=242 op=LOAD Jan 26 18:18:38.696000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4588 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636646132366266633739656164343864636630353866636337343165 Jan 26 18:18:38.696000 audit: BPF prog-id=242 op=UNLOAD Jan 26 18:18:38.696000 audit[4682]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4588 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636646132366266633739656164343864636630353866636337343165 Jan 26 18:18:38.696000 audit: BPF prog-id=243 op=LOAD Jan 26 18:18:38.696000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4588 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636646132366266633739656164343864636630353866636337343165 Jan 26 18:18:38.696000 audit: BPF prog-id=244 op=LOAD Jan 26 18:18:38.696000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4588 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636646132366266633739656164343864636630353866636337343165 Jan 26 18:18:38.696000 audit: BPF prog-id=244 op=UNLOAD Jan 26 18:18:38.696000 audit[4682]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4588 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636646132366266633739656164343864636630353866636337343165 Jan 26 18:18:38.696000 audit: BPF prog-id=243 op=UNLOAD Jan 26 18:18:38.696000 audit[4682]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4588 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636646132366266633739656164343864636630353866636337343165 Jan 26 18:18:38.696000 audit: BPF prog-id=245 op=LOAD Jan 26 18:18:38.696000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4588 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:38.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636646132366266633739656164343864636630353866636337343165 Jan 26 18:18:38.723224 containerd[1597]: time="2026-01-26T18:18:38.723194426Z" level=info msg="StartContainer for \"66da26bfc79ead48dcf058fcc741ea5bd80f37069857476211498efcbdb473e4\" returns successfully" Jan 26 18:18:38.821525 containerd[1597]: time="2026-01-26T18:18:38.821267984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:38.822897 containerd[1597]: time="2026-01-26T18:18:38.822741628Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:18:38.823044 containerd[1597]: time="2026-01-26T18:18:38.823005839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:38.823519 kubelet[2826]: E0126 18:18:38.823463 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:18:38.823963 kubelet[2826]: E0126 18:18:38.823579 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:18:38.823963 kubelet[2826]: E0126 18:18:38.823802 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-df5rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dc747cc9d-68rtx_calico-apiserver(6c6db6ff-501e-4b31-91b6-e1eb34423484): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:38.825784 kubelet[2826]: E0126 18:18:38.825696 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-68rtx" podUID="6c6db6ff-501e-4b31-91b6-e1eb34423484" Jan 26 18:18:39.013649 kubelet[2826]: E0126 18:18:39.012972 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-68rtx" podUID="6c6db6ff-501e-4b31-91b6-e1eb34423484" Jan 26 18:18:39.014651 kubelet[2826]: E0126 18:18:39.014583 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:39.015225 kubelet[2826]: E0126 18:18:39.015119 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hc8sx" podUID="e2e25c6b-345b-4146-b644-2efa5b4232f3" Jan 26 18:18:39.015735 kubelet[2826]: E0126 18:18:39.015376 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:39.016371 kubelet[2826]: E0126 18:18:39.016236 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df84d7cd-llmh4" podUID="abefa45c-cb84-41f4-b47b-49099e1244be" Jan 26 18:18:39.039361 kubelet[2826]: I0126 18:18:39.039299 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6586m" podStartSLOduration=40.039280583 podStartE2EDuration="40.039280583s" podCreationTimestamp="2026-01-26 18:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:18:39.03636697 +0000 UTC m=+45.967227692" watchObservedRunningTime="2026-01-26 18:18:39.039280583 +0000 UTC m=+45.970141285" Jan 26 18:18:39.056000 audit[4725]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=4725 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:39.056000 audit[4725]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffb268cf00 a2=0 a3=7fffb268ceec items=0 ppid=2938 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.056000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:39.072000 audit[4725]: NETFILTER_CFG table=nat:137 family=2 entries=20 op=nft_register_rule pid=4725 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:39.072000 audit[4725]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffb268cf00 a2=0 a3=7fffb268ceec items=0 ppid=2938 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.072000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:39.144146 systemd-networkd[1518]: calic11b43ba5de: Gained IPv6LL 
Jan 26 18:18:39.231274 containerd[1597]: time="2026-01-26T18:18:39.231217950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc747cc9d-4d52w,Uid:2acaefc0-3f5b-4b21-902d-9d452b4924d3,Namespace:calico-apiserver,Attempt:0,}" Jan 26 18:18:39.232347 containerd[1597]: time="2026-01-26T18:18:39.232261243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4jch,Uid:6589570b-6489-4043-b23c-e5a49733eb4e,Namespace:calico-system,Attempt:0,}" Jan 26 18:18:39.373484 systemd-networkd[1518]: cali61762f2c3b7: Link UP Jan 26 18:18:39.374293 systemd-networkd[1518]: cali61762f2c3b7: Gained carrier Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.283 [INFO][4726] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6dc747cc9d--4d52w-eth0 calico-apiserver-6dc747cc9d- calico-apiserver 2acaefc0-3f5b-4b21-902d-9d452b4924d3 828 0 2026-01-26 18:18:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dc747cc9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6dc747cc9d-4d52w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali61762f2c3b7 [] [] }} ContainerID="0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-4d52w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--4d52w-" Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.283 [INFO][4726] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-4d52w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--4d52w-eth0" Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.325 [INFO][4754] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" HandleID="k8s-pod-network.0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" Workload="localhost-k8s-calico--apiserver--6dc747cc9d--4d52w-eth0" Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.325 [INFO][4754] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" HandleID="k8s-pod-network.0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" Workload="localhost-k8s-calico--apiserver--6dc747cc9d--4d52w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4e30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6dc747cc9d-4d52w", "timestamp":"2026-01-26 18:18:39.325036256 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.325 [INFO][4754] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.325 [INFO][4754] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.326 [INFO][4754] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.332 [INFO][4754] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" host="localhost" Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.339 [INFO][4754] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.344 [INFO][4754] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.346 [INFO][4754] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.348 [INFO][4754] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.348 [INFO][4754] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" host="localhost" Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.350 [INFO][4754] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77 Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.356 [INFO][4754] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" host="localhost" Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.366 [INFO][4754] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" host="localhost" Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.366 [INFO][4754] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" host="localhost" Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.366 [INFO][4754] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 26 18:18:39.393512 containerd[1597]: 2026-01-26 18:18:39.366 [INFO][4754] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" HandleID="k8s-pod-network.0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" Workload="localhost-k8s-calico--apiserver--6dc747cc9d--4d52w-eth0" Jan 26 18:18:39.394335 containerd[1597]: 2026-01-26 18:18:39.369 [INFO][4726] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-4d52w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--4d52w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc747cc9d--4d52w-eth0", GenerateName:"calico-apiserver-6dc747cc9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2acaefc0-3f5b-4b21-902d-9d452b4924d3", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc747cc9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6dc747cc9d-4d52w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61762f2c3b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:39.394335 containerd[1597]: 2026-01-26 18:18:39.369 [INFO][4726] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-4d52w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--4d52w-eth0" Jan 26 18:18:39.394335 containerd[1597]: 2026-01-26 18:18:39.369 [INFO][4726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61762f2c3b7 ContainerID="0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-4d52w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--4d52w-eth0" Jan 26 18:18:39.394335 containerd[1597]: 2026-01-26 18:18:39.376 [INFO][4726] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-4d52w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--4d52w-eth0" Jan 26 18:18:39.394335 containerd[1597]: 2026-01-26 18:18:39.377 [INFO][4726] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-4d52w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--4d52w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc747cc9d--4d52w-eth0", GenerateName:"calico-apiserver-6dc747cc9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2acaefc0-3f5b-4b21-902d-9d452b4924d3", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc747cc9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77", Pod:"calico-apiserver-6dc747cc9d-4d52w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61762f2c3b7", MAC:"a2:8f:1d:19:28:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:39.394335 containerd[1597]: 2026-01-26 18:18:39.387 [INFO][4726] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" Namespace="calico-apiserver" Pod="calico-apiserver-6dc747cc9d-4d52w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc747cc9d--4d52w-eth0" Jan 26 18:18:39.406000 audit[4781]: NETFILTER_CFG table=filter:138 family=2 entries=63 op=nft_register_chain pid=4781 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:18:39.406000 audit[4781]: SYSCALL arch=c000003e syscall=46 success=yes exit=30680 a0=3 a1=7ffc61afe710 a2=0 a3=7ffc61afe6fc items=0 ppid=3990 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.406000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:18:39.425492 containerd[1597]: time="2026-01-26T18:18:39.425440615Z" level=info msg="connecting to shim 0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77" address="unix:///run/containerd/s/8aac21969981dd5d2b33e5b4ee2ee41a1298d8417ce1b36649e62e2cd6de2130" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:18:39.460114 systemd[1]: Started cri-containerd-0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77.scope - libcontainer container 0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77. 
Jan 26 18:18:39.482256 systemd-networkd[1518]: cali7bfa294dafe: Link UP Jan 26 18:18:39.480000 audit: BPF prog-id=246 op=LOAD Jan 26 18:18:39.483511 systemd-networkd[1518]: cali7bfa294dafe: Gained carrier Jan 26 18:18:39.483000 audit: BPF prog-id=247 op=LOAD Jan 26 18:18:39.483000 audit[4801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4790 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062336335333930613234323337356661343732623366376530646532 Jan 26 18:18:39.483000 audit: BPF prog-id=247 op=UNLOAD Jan 26 18:18:39.483000 audit[4801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4790 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062336335333930613234323337356661343732623366376530646532 Jan 26 18:18:39.483000 audit: BPF prog-id=248 op=LOAD Jan 26 18:18:39.483000 audit[4801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4790 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062336335333930613234323337356661343732623366376530646532 Jan 26 18:18:39.483000 audit: BPF prog-id=249 op=LOAD Jan 26 18:18:39.483000 audit[4801]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4790 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062336335333930613234323337356661343732623366376530646532 Jan 26 18:18:39.483000 audit: BPF prog-id=249 op=UNLOAD Jan 26 18:18:39.483000 audit[4801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4790 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.483000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062336335333930613234323337356661343732623366376530646532 Jan 26 18:18:39.483000 audit: BPF prog-id=248 op=UNLOAD Jan 26 18:18:39.483000 audit[4801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4790 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062336335333930613234323337356661343732623366376530646532 Jan 26 18:18:39.483000 audit: BPF prog-id=250 op=LOAD Jan 26 18:18:39.483000 audit[4801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4790 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062336335333930613234323337356661343732623366376530646532 Jan 26 18:18:39.486797 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.313 [INFO][4734] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--x4jch-eth0 csi-node-driver- calico-system 6589570b-6489-4043-b23c-e5a49733eb4e 706 0 2026-01-26 18:18:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-x4jch eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7bfa294dafe [] [] }} ContainerID="f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" Namespace="calico-system" Pod="csi-node-driver-x4jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4jch-" Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.314 [INFO][4734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" Namespace="calico-system" Pod="csi-node-driver-x4jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4jch-eth0" Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.359 [INFO][4764] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" HandleID="k8s-pod-network.f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" Workload="localhost-k8s-csi--node--driver--x4jch-eth0" Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.359 [INFO][4764] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" HandleID="k8s-pod-network.f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" Workload="localhost-k8s-csi--node--driver--x4jch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ab7d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-x4jch", "timestamp":"2026-01-26 18:18:39.359375821 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.359 [INFO][4764] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.366 [INFO][4764] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.366 [INFO][4764] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.435 [INFO][4764] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" host="localhost" Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.447 [INFO][4764] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.453 [INFO][4764] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.455 [INFO][4764] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.459 [INFO][4764] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.459 [INFO][4764] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" host="localhost" Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.461 [INFO][4764] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4 Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.468 [INFO][4764] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" host="localhost" Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.475 [INFO][4764] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" host="localhost" Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.475 [INFO][4764] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" host="localhost" Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.475 [INFO][4764] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 26 18:18:39.508092 containerd[1597]: 2026-01-26 18:18:39.475 [INFO][4764] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" HandleID="k8s-pod-network.f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" Workload="localhost-k8s-csi--node--driver--x4jch-eth0" Jan 26 18:18:39.508744 containerd[1597]: 2026-01-26 18:18:39.479 [INFO][4734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" Namespace="calico-system" Pod="csi-node-driver-x4jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4jch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x4jch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6589570b-6489-4043-b23c-e5a49733eb4e", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 18, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-x4jch", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7bfa294dafe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:39.508744 containerd[1597]: 2026-01-26 18:18:39.479 [INFO][4734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" Namespace="calico-system" Pod="csi-node-driver-x4jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4jch-eth0" Jan 26 18:18:39.508744 containerd[1597]: 2026-01-26 18:18:39.479 [INFO][4734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bfa294dafe ContainerID="f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" Namespace="calico-system" Pod="csi-node-driver-x4jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4jch-eth0" Jan 26 18:18:39.508744 containerd[1597]: 2026-01-26 18:18:39.484 [INFO][4734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" Namespace="calico-system" Pod="csi-node-driver-x4jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4jch-eth0" Jan 26 18:18:39.508744 containerd[1597]: 2026-01-26 18:18:39.485 [INFO][4734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" Namespace="calico-system" Pod="csi-node-driver-x4jch" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--x4jch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x4jch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6589570b-6489-4043-b23c-e5a49733eb4e", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2026, time.January, 26, 18, 18, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4", Pod:"csi-node-driver-x4jch", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7bfa294dafe", MAC:"1e:76:81:2c:ee:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 26 18:18:39.508744 containerd[1597]: 2026-01-26 18:18:39.504 [INFO][4734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" Namespace="calico-system" Pod="csi-node-driver-x4jch" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4jch-eth0" Jan 26 18:18:39.519000 audit[4831]: NETFILTER_CFG table=filter:139 family=2 entries=56 op=nft_register_chain pid=4831 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 26 18:18:39.519000 audit[4831]: SYSCALL arch=c000003e syscall=46 success=yes exit=25500 a0=3 a1=7fff75d10cb0 a2=0 a3=7fff75d10c9c items=0 ppid=3990 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.519000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 26 18:18:39.539128 containerd[1597]: time="2026-01-26T18:18:39.538938046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc747cc9d-4d52w,Uid:2acaefc0-3f5b-4b21-902d-9d452b4924d3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0b3c5390a242375fa472b3f7e0de2f1eb9be9c20b1df5a77dcf67c4d9cf63b77\"" Jan 26 18:18:39.539316 containerd[1597]: time="2026-01-26T18:18:39.539267388Z" level=info msg="connecting to shim f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4" address="unix:///run/containerd/s/d517de74ca680d2d83117fd0c021e1b2d691c5a0ce30ef4a0c40d325eb976767" namespace=k8s.io protocol=ttrpc version=3 Jan 26 18:18:39.541182 containerd[1597]: time="2026-01-26T18:18:39.541145856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:18:39.584171 systemd[1]: Started 
cri-containerd-f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4.scope - libcontainer container f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4. Jan 26 18:18:39.595000 audit: BPF prog-id=251 op=LOAD Jan 26 18:18:39.596000 audit: BPF prog-id=252 op=LOAD Jan 26 18:18:39.596000 audit[4858]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4846 pid=4858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630366161653136383937363061376436393430363330666636623535 Jan 26 18:18:39.596000 audit: BPF prog-id=252 op=UNLOAD Jan 26 18:18:39.596000 audit[4858]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4846 pid=4858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630366161653136383937363061376436393430363330666636623535 Jan 26 18:18:39.596000 audit: BPF prog-id=253 op=LOAD Jan 26 18:18:39.596000 audit[4858]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4846 pid=4858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630366161653136383937363061376436393430363330666636623535 Jan 26 18:18:39.596000 audit: BPF prog-id=254 op=LOAD Jan 26 18:18:39.596000 audit[4858]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4846 pid=4858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630366161653136383937363061376436393430363330666636623535 Jan 26 18:18:39.596000 audit: BPF prog-id=254 op=UNLOAD Jan 26 18:18:39.596000 audit[4858]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4846 pid=4858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.596000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630366161653136383937363061376436393430363330666636623535 Jan 26 18:18:39.596000 audit: BPF prog-id=253 op=UNLOAD Jan 26 18:18:39.596000 audit[4858]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4846 pid=4858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630366161653136383937363061376436393430363330666636623535 Jan 26 18:18:39.596000 audit: BPF prog-id=255 op=LOAD Jan 26 18:18:39.596000 audit[4858]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4846 pid=4858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:39.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630366161653136383937363061376436393430363330666636623535 Jan 26 18:18:39.599238 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 26 18:18:39.605907 containerd[1597]: time="2026-01-26T18:18:39.605768917Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:39.607901 containerd[1597]: time="2026-01-26T18:18:39.607787947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:18:39.607946 containerd[1597]: time="2026-01-26T18:18:39.607928137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:39.608608 kubelet[2826]: E0126 18:18:39.608541 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:18:39.608690 kubelet[2826]: E0126 18:18:39.608610 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:18:39.609089 kubelet[2826]: E0126 18:18:39.609055 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv6jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dc747cc9d-4d52w_calico-apiserver(2acaefc0-3f5b-4b21-902d-9d452b4924d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:39.610980 kubelet[2826]: E0126 18:18:39.610942 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-4d52w" podUID="2acaefc0-3f5b-4b21-902d-9d452b4924d3" Jan 26 18:18:39.619516 containerd[1597]: time="2026-01-26T18:18:39.619455636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4jch,Uid:6589570b-6489-4043-b23c-e5a49733eb4e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f06aae1689760a7d6940630ff6b557dcbadd190581ff089890ecfe4846b599a4\"" Jan 26 18:18:39.622318 containerd[1597]: time="2026-01-26T18:18:39.622221759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 26 18:18:39.689108 containerd[1597]: time="2026-01-26T18:18:39.688938533Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:39.690439 containerd[1597]: time="2026-01-26T18:18:39.690339745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 26 18:18:39.690551 containerd[1597]: time="2026-01-26T18:18:39.690508033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:39.690788 kubelet[2826]: E0126 18:18:39.690716 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 26 18:18:39.690788 kubelet[2826]: E0126 18:18:39.690781 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 26 18:18:39.691037 kubelet[2826]: E0126 18:18:39.690982 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9st6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x4jch_calico-system(6589570b-6489-4043-b23c-e5a49733eb4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:39.693908 containerd[1597]: time="2026-01-26T18:18:39.693786130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 26 18:18:39.721157 systemd-networkd[1518]: cali5375bf20155: Gained IPv6LL Jan 26 18:18:39.756617 containerd[1597]: time="2026-01-26T18:18:39.756422366Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:39.758122 containerd[1597]: time="2026-01-26T18:18:39.757897667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 26 18:18:39.758122 containerd[1597]: time="2026-01-26T18:18:39.757973903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:39.758272 kubelet[2826]: E0126 18:18:39.758179 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 26 18:18:39.758272 kubelet[2826]: E0126 18:18:39.758231 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 26 18:18:39.758408 kubelet[2826]: E0126 18:18:39.758336 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9st6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x4jch_calico-system(6589570b-6489-4043-b23c-e5a49733eb4e): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:39.759696 kubelet[2826]: E0126 18:18:39.759606 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x4jch" podUID="6589570b-6489-4043-b23c-e5a49733eb4e" Jan 26 18:18:40.019347 kubelet[2826]: E0126 18:18:40.019182 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-4d52w" podUID="2acaefc0-3f5b-4b21-902d-9d452b4924d3" Jan 26 18:18:40.021164 kubelet[2826]: E0126 18:18:40.020556 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:40.021451 kubelet[2826]: E0126 18:18:40.021394 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-68rtx" podUID="6c6db6ff-501e-4b31-91b6-e1eb34423484" Jan 26 18:18:40.023117 kubelet[2826]: E0126 18:18:40.023089 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x4jch" podUID="6589570b-6489-4043-b23c-e5a49733eb4e" Jan 26 18:18:40.051000 audit[4882]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=4882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:40.051000 audit[4882]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc2fbb2a50 a2=0 a3=7ffc2fbb2a3c items=0 ppid=2938 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:40.051000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:40.059000 audit[4882]: NETFILTER_CFG table=nat:141 family=2 entries=44 op=nft_register_rule pid=4882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:40.059000 audit[4882]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc2fbb2a50 a2=0 a3=7ffc2fbb2a3c items=0 ppid=2938 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:40.059000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:40.168035 systemd-networkd[1518]: cali06ae1eaa792: Gained IPv6LL Jan 26 18:18:40.424318 systemd-networkd[1518]: cali61762f2c3b7: Gained IPv6LL Jan 26 18:18:40.808156 systemd-networkd[1518]: cali7bfa294dafe: Gained IPv6LL Jan 26 18:18:40.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.64:22-10.0.0.1:40968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:40.836669 systemd[1]: Started sshd@10-10.0.0.64:22-10.0.0.1:40968.service - OpenSSH per-connection server daemon (10.0.0.1:40968). Jan 26 18:18:40.839395 kernel: kauditd_printk_skb: 242 callbacks suppressed Jan 26 18:18:40.839494 kernel: audit: type=1130 audit(1769451520.834:747): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.64:22-10.0.0.1:40968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:40.944000 audit[4884]: USER_ACCT pid=4884 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:40.946280 sshd[4884]: Accepted publickey for core from 10.0.0.1 port 40968 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:18:40.948766 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:18:40.954325 systemd-logind[1579]: New session 12 of user core. 
Jan 26 18:18:40.945000 audit[4884]: CRED_ACQ pid=4884 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:40.962909 kernel: audit: type=1101 audit(1769451520.944:748): pid=4884 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:40.962970 kernel: audit: type=1103 audit(1769451520.945:749): pid=4884 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:40.962988 kernel: audit: type=1006 audit(1769451520.945:750): pid=4884 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 26 18:18:40.945000 audit[4884]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda366f3f0 a2=3 a3=0 items=0 ppid=1 pid=4884 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:40.977182 kernel: audit: type=1300 audit(1769451520.945:750): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda366f3f0 a2=3 a3=0 items=0 ppid=1 pid=4884 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:40.945000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:18:40.981432 kernel: audit: type=1327 audit(1769451520.945:750): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:18:40.984096 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 26 18:18:40.985000 audit[4884]: USER_START pid=4884 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:40.987000 audit[4888]: CRED_ACQ pid=4888 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:41.007734 kernel: audit: type=1105 audit(1769451520.985:751): pid=4884 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:41.007789 kernel: audit: type=1103 audit(1769451520.987:752): pid=4888 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:41.030365 kubelet[2826]: E0126 18:18:41.030299 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:41.034774 kubelet[2826]: E0126 18:18:41.033967 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-4d52w" podUID="2acaefc0-3f5b-4b21-902d-9d452b4924d3" Jan 26 18:18:41.034774 kubelet[2826]: E0126 18:18:41.032793 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x4jch" podUID="6589570b-6489-4043-b23c-e5a49733eb4e" Jan 26 18:18:41.120000 audit[4900]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4900 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:41.131867 kernel: audit: type=1325 audit(1769451521.120:753): table=filter:142 family=2 entries=14 op=nft_register_rule pid=4900 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:41.120000 audit[4900]: SYSCALL arch=c000003e syscall=46 
success=yes exit=5248 a0=3 a1=7ffed00e2090 a2=0 a3=7ffed00e207c items=0 ppid=2938 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:41.139095 sshd[4888]: Connection closed by 10.0.0.1 port 40968 Jan 26 18:18:41.140168 sshd-session[4884]: pam_unix(sshd:session): session closed for user core Jan 26 18:18:41.120000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:41.141000 audit[4884]: USER_END pid=4884 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:41.141000 audit[4884]: CRED_DISP pid=4884 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:41.148860 kernel: audit: type=1300 audit(1769451521.120:753): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffed00e2090 a2=0 a3=7ffed00e207c items=0 ppid=2938 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:41.149147 systemd[1]: sshd@10-10.0.0.64:22-10.0.0.1:40968.service: Deactivated successfully. Jan 26 18:18:41.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.64:22-10.0.0.1:40968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:41.152636 systemd[1]: session-12.scope: Deactivated successfully. Jan 26 18:18:41.154936 systemd-logind[1579]: Session 12 logged out. Waiting for processes to exit. Jan 26 18:18:41.157011 systemd-logind[1579]: Removed session 12. 
Jan 26 18:18:41.158000 audit[4900]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=4900 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:18:41.158000 audit[4900]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffed00e2090 a2=0 a3=7ffed00e207c items=0 ppid=2938 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:41.158000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:18:42.028352 kubelet[2826]: E0126 18:18:42.028310 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:42.247415 kubelet[2826]: I0126 18:18:42.247199 2826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:18:42.248425 kubelet[2826]: E0126 18:18:42.248237 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:43.030757 kubelet[2826]: E0126 18:18:43.030697 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:18:45.232072 containerd[1597]: time="2026-01-26T18:18:45.232030239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 26 18:18:45.311238 containerd[1597]: time="2026-01-26T18:18:45.311103388Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:45.312559 containerd[1597]: time="2026-01-26T18:18:45.312517556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 26 18:18:45.312668 containerd[1597]: time="2026-01-26T18:18:45.312591911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:45.312867 kubelet[2826]: E0126 18:18:45.312734 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:18:45.313202 kubelet[2826]: E0126 18:18:45.312882 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:18:45.313202 kubelet[2826]: E0126 18:18:45.313009 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:84b790fb7a3241c08a1d443333567276,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgncg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bf8465869-rt8sq_calico-system(7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:45.315165 containerd[1597]: time="2026-01-26T18:18:45.315048252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 26 18:18:45.381847 containerd[1597]: time="2026-01-26T18:18:45.381717414Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:45.383559 containerd[1597]: time="2026-01-26T18:18:45.383438303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 26 18:18:45.383617 containerd[1597]: time="2026-01-26T18:18:45.383521489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:45.383962 kubelet[2826]: E0126 18:18:45.383899 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:18:45.384017 kubelet[2826]: E0126 18:18:45.383985 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:18:45.384239 kubelet[2826]: E0126 18:18:45.384167 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgncg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bf8465869-rt8sq_calico-system(7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:45.385571 kubelet[2826]: E0126 18:18:45.385470 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bf8465869-rt8sq" podUID="7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8" Jan 26 18:18:46.156104 systemd[1]: Started sshd@11-10.0.0.64:22-10.0.0.1:40546.service - OpenSSH per-connection server daemon (10.0.0.1:40546). Jan 26 18:18:46.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.64:22-10.0.0.1:40546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:18:46.158077 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 26 18:18:46.158157 kernel: audit: type=1130 audit(1769451526.155:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.64:22-10.0.0.1:40546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:46.232000 audit[4968]: USER_ACCT pid=4968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.233765 sshd[4968]: Accepted publickey for core from 10.0.0.1 port 40546 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:18:46.242000 audit[4968]: CRED_ACQ pid=4968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.245773 sshd-session[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:18:46.253153 systemd-logind[1579]: New session 13 of user core. Jan 26 18:18:46.255295 kernel: audit: type=1101 audit(1769451526.232:759): pid=4968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.255352 kernel: audit: type=1103 audit(1769451526.242:760): pid=4968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.255378 kernel: audit: type=1006 audit(1769451526.242:761): pid=4968 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 26 18:18:46.242000 audit[4968]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb7bf6630 a2=3 a3=0 items=0 ppid=1 pid=4968 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:46.271210 kernel: audit: type=1300 audit(1769451526.242:761): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb7bf6630 a2=3 a3=0 items=0 ppid=1 pid=4968 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:46.271321 kernel: audit: type=1327 audit(1769451526.242:761): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:18:46.242000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:18:46.287181 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 26 18:18:46.289000 audit[4968]: USER_START pid=4968 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.293000 audit[4972]: CRED_ACQ pid=4972 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.315522 kernel: audit: type=1105 audit(1769451526.289:762): pid=4968 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.315597 kernel: audit: type=1103 audit(1769451526.293:763): pid=4972 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.395949 sshd[4972]: Connection closed by 10.0.0.1 port 40546 Jan 26 18:18:46.399063 sshd-session[4968]: pam_unix(sshd:session): session closed for user core Jan 26 18:18:46.400000 audit[4968]: USER_END pid=4968 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.400000 audit[4968]: CRED_DISP pid=4968 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.422084 systemd[1]: sshd@11-10.0.0.64:22-10.0.0.1:40546.service: Deactivated successfully. Jan 26 18:18:46.425468 systemd[1]: session-13.scope: Deactivated successfully. Jan 26 18:18:46.427490 systemd-logind[1579]: Session 13 logged out. Waiting for processes to exit. Jan 26 18:18:46.428012 kernel: audit: type=1106 audit(1769451526.400:764): pid=4968 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.428070 kernel: audit: type=1104 audit(1769451526.400:765): pid=4968 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.64:22-10.0.0.1:40546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:18:46.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.64:22-10.0.0.1:40548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:46.433469 systemd[1]: Started sshd@12-10.0.0.64:22-10.0.0.1:40548.service - OpenSSH per-connection server daemon (10.0.0.1:40548). Jan 26 18:18:46.436215 systemd-logind[1579]: Removed session 13. Jan 26 18:18:46.508000 audit[4987]: USER_ACCT pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.509517 sshd[4987]: Accepted publickey for core from 10.0.0.1 port 40548 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:18:46.510000 audit[4987]: CRED_ACQ pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.510000 audit[4987]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd889f6030 a2=3 a3=0 items=0 ppid=1 pid=4987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:46.510000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:18:46.512569 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:18:46.519641 systemd-logind[1579]: New session 14 of user core. Jan 26 18:18:46.527147 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 26 18:18:46.531000 audit[4987]: USER_START pid=4987 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.532000 audit[4991]: CRED_ACQ pid=4991 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.663679 sshd[4991]: Connection closed by 10.0.0.1 port 40548 Jan 26 18:18:46.664301 sshd-session[4987]: pam_unix(sshd:session): session closed for user core Jan 26 18:18:46.665000 audit[4987]: USER_END pid=4987 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.666000 audit[4987]: CRED_DISP pid=4987 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.680655 systemd[1]: sshd@12-10.0.0.64:22-10.0.0.1:40548.service: Deactivated successfully. 
Jan 26 18:18:46.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.64:22-10.0.0.1:40548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:46.687636 systemd[1]: session-14.scope: Deactivated successfully. Jan 26 18:18:46.692446 systemd-logind[1579]: Session 14 logged out. Waiting for processes to exit. Jan 26 18:18:46.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.64:22-10.0.0.1:40550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:46.696548 systemd[1]: Started sshd@13-10.0.0.64:22-10.0.0.1:40550.service - OpenSSH per-connection server daemon (10.0.0.1:40550). Jan 26 18:18:46.700107 systemd-logind[1579]: Removed session 14. Jan 26 18:18:46.751000 audit[5002]: USER_ACCT pid=5002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.754132 sshd[5002]: Accepted publickey for core from 10.0.0.1 port 40550 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:18:46.753000 audit[5002]: CRED_ACQ pid=5002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.753000 audit[5002]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec67d7430 a2=3 a3=0 items=0 ppid=1 pid=5002 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:46.753000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:18:46.756145 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:18:46.762523 systemd-logind[1579]: New session 15 of user core. Jan 26 18:18:46.774380 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 26 18:18:46.776000 audit[5002]: USER_START pid=5002 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.779000 audit[5006]: CRED_ACQ pid=5006 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.863619 sshd[5006]: Connection closed by 10.0.0.1 port 40550 Jan 26 18:18:46.864147 sshd-session[5002]: pam_unix(sshd:session): session closed for user core Jan 26 18:18:46.865000 audit[5002]: USER_END pid=5002 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.865000 audit[5002]: CRED_DISP pid=5002 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:46.869540 systemd[1]: sshd@13-10.0.0.64:22-10.0.0.1:40550.service: Deactivated successfully. Jan 26 18:18:46.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.64:22-10.0.0.1:40550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:46.872127 systemd[1]: session-15.scope: Deactivated successfully. Jan 26 18:18:46.874543 systemd-logind[1579]: Session 15 logged out. Waiting for processes to exit. Jan 26 18:18:46.877023 systemd-logind[1579]: Removed session 15. 
Jan 26 18:18:51.230878 containerd[1597]: time="2026-01-26T18:18:51.230741340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 26 18:18:51.341551 containerd[1597]: time="2026-01-26T18:18:51.341375267Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:51.342895 containerd[1597]: time="2026-01-26T18:18:51.342686956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 26 18:18:51.342895 containerd[1597]: time="2026-01-26T18:18:51.342813962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:51.343078 kubelet[2826]: E0126 18:18:51.343044 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:18:51.343501 kubelet[2826]: E0126 18:18:51.343416 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:18:51.343761 kubelet[2826]: E0126 18:18:51.343668 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5cx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6df84d7cd-llmh4_calico-system(abefa45c-cb84-41f4-b47b-49099e1244be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:51.344011 containerd[1597]: time="2026-01-26T18:18:51.343974882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 26 18:18:51.345578 kubelet[2826]: E0126 18:18:51.345508 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df84d7cd-llmh4" podUID="abefa45c-cb84-41f4-b47b-49099e1244be" Jan 26 18:18:51.460092 containerd[1597]: time="2026-01-26T18:18:51.460012631Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:51.461298 containerd[1597]: time="2026-01-26T18:18:51.461213585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 26 18:18:51.461298 containerd[1597]: time="2026-01-26T18:18:51.461255338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:51.461443 kubelet[2826]: E0126 18:18:51.461411 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:18:51.461546 kubelet[2826]: E0126 18:18:51.461453 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:18:51.461725 kubelet[2826]: E0126 18:18:51.461584 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2kbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hc8sx_calico-system(e2e25c6b-345b-4146-b644-2efa5b4232f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:51.463278 kubelet[2826]: E0126 18:18:51.463222 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hc8sx" podUID="e2e25c6b-345b-4146-b644-2efa5b4232f3" Jan 26 18:18:51.879657 systemd[1]: Started sshd@14-10.0.0.64:22-10.0.0.1:40562.service - OpenSSH per-connection server daemon (10.0.0.1:40562). 
Jan 26 18:18:51.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.64:22-10.0.0.1:40562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:51.882560 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 26 18:18:51.882692 kernel: audit: type=1130 audit(1769451531.879:785): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.64:22-10.0.0.1:40562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:51.948000 audit[5025]: USER_ACCT pid=5025 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:51.950025 sshd[5025]: Accepted publickey for core from 10.0.0.1 port 40562 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:18:51.953103 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:18:51.959635 systemd-logind[1579]: New session 16 of user core. Jan 26 18:18:51.950000 audit[5025]: CRED_ACQ pid=5025 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:51.971946 kernel: audit: type=1101 audit(1769451531.948:786): pid=5025 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:51.971997 kernel: audit: type=1103 audit(1769451531.950:787): pid=5025 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:51.972023 kernel: audit: type=1006 audit(1769451531.950:788): pid=5025 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 26 18:18:51.950000 audit[5025]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7fd50c70 a2=3 a3=0 items=0 ppid=1 pid=5025 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:51.987077 kernel: audit: type=1300 audit(1769451531.950:788): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7fd50c70 a2=3 a3=0 items=0 ppid=1 pid=5025 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:51.987108 kernel: audit: type=1327 audit(1769451531.950:788): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:18:51.950000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:18:51.992039 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 26 18:18:51.993000 audit[5025]: USER_START pid=5025 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:52.006888 kernel: audit: type=1105 audit(1769451531.993:789): pid=5025 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:52.006983 kernel: audit: type=1103 audit(1769451531.996:790): pid=5029 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:51.996000 audit[5029]: CRED_ACQ pid=5029 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:52.083597 sshd[5029]: Connection closed by 10.0.0.1 port 40562 Jan 26 18:18:52.084053 sshd-session[5025]: pam_unix(sshd:session): session closed for user core Jan 26 18:18:52.085000 audit[5025]: USER_END pid=5025 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:52.089897 systemd[1]: sshd@14-10.0.0.64:22-10.0.0.1:40562.service: Deactivated successfully. Jan 26 18:18:52.091954 systemd[1]: session-16.scope: Deactivated successfully. Jan 26 18:18:52.093036 systemd-logind[1579]: Session 16 logged out. Waiting for processes to exit. Jan 26 18:18:52.094547 systemd-logind[1579]: Removed session 16. Jan 26 18:18:52.085000 audit[5025]: CRED_DISP pid=5025 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:52.104270 kernel: audit: type=1106 audit(1769451532.085:791): pid=5025 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:52.104324 kernel: audit: type=1104 audit(1769451532.085:792): pid=5025 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:52.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.64:22-10.0.0.1:40562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:18:54.232164 containerd[1597]: time="2026-01-26T18:18:54.232094623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:18:54.350491 containerd[1597]: time="2026-01-26T18:18:54.350388097Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:54.351897 containerd[1597]: time="2026-01-26T18:18:54.351760350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:18:54.351897 containerd[1597]: time="2026-01-26T18:18:54.351912457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:54.352187 kubelet[2826]: E0126 18:18:54.352035 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:18:54.352187 kubelet[2826]: E0126 18:18:54.352111 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:18:54.352612 kubelet[2826]: E0126 18:18:54.352254 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-df5rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dc747cc9d-68rtx_calico-apiserver(6c6db6ff-501e-4b31-91b6-e1eb34423484): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:54.353579 kubelet[2826]: E0126 18:18:54.353534 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-68rtx" podUID="6c6db6ff-501e-4b31-91b6-e1eb34423484" Jan 26 18:18:56.231190 containerd[1597]: time="2026-01-26T18:18:56.231056456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 26 18:18:56.303530 containerd[1597]: time="2026-01-26T18:18:56.303366556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:56.305013 containerd[1597]: time="2026-01-26T18:18:56.304808439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 26 18:18:56.305013 containerd[1597]: time="2026-01-26T18:18:56.304965541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:56.305201 kubelet[2826]: E0126 18:18:56.305156 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 26 18:18:56.305880 kubelet[2826]: E0126 18:18:56.305214 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 26 18:18:56.305880 kubelet[2826]: E0126 18:18:56.305429 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9st6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x4jch_calico-system(6589570b-6489-4043-b23c-e5a49733eb4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:56.306016 containerd[1597]: time="2026-01-26T18:18:56.305499685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:18:56.366409 containerd[1597]: time="2026-01-26T18:18:56.366327038Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:56.368018 containerd[1597]: time="2026-01-26T18:18:56.367967305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:18:56.368082 containerd[1597]: time="2026-01-26T18:18:56.368054767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:56.368294 kubelet[2826]: E0126 18:18:56.368237 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:18:56.368294 kubelet[2826]: E0126 18:18:56.368283 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:18:56.368544 kubelet[2826]: E0126 18:18:56.368490 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv6jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dc747cc9d-4d52w_calico-apiserver(2acaefc0-3f5b-4b21-902d-9d452b4924d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:56.369543 containerd[1597]: time="2026-01-26T18:18:56.369449842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 26 18:18:56.370188 kubelet[2826]: E0126 18:18:56.370002 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-4d52w" podUID="2acaefc0-3f5b-4b21-902d-9d452b4924d3" Jan 26 18:18:56.443593 containerd[1597]: time="2026-01-26T18:18:56.443495169Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:18:56.444999 containerd[1597]: time="2026-01-26T18:18:56.444935639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 26 18:18:56.445046 containerd[1597]: time="2026-01-26T18:18:56.445027680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 26 18:18:56.445333 kubelet[2826]: E0126 18:18:56.445238 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 26 18:18:56.445389 kubelet[2826]: E0126 18:18:56.445329 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 26 18:18:56.445624 kubelet[2826]: E0126 18:18:56.445489 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9st6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x4jch_calico-system(6589570b-6489-4043-b23c-e5a49733eb4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 26 18:18:56.447150 kubelet[2826]: E0126 18:18:56.446990 2826 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x4jch" podUID="6589570b-6489-4043-b23c-e5a49733eb4e" Jan 26 18:18:57.096964 systemd[1]: Started sshd@15-10.0.0.64:22-10.0.0.1:53600.service - OpenSSH per-connection server daemon (10.0.0.1:53600). Jan 26 18:18:57.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.64:22-10.0.0.1:53600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:57.100898 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:18:57.101016 kernel: audit: type=1130 audit(1769451537.096:794): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.64:22-10.0.0.1:53600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:18:57.170000 audit[5051]: USER_ACCT pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:57.171513 sshd[5051]: Accepted publickey for core from 10.0.0.1 port 53600 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:18:57.173451 sshd-session[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:18:57.179641 systemd-logind[1579]: New session 17 of user core. 
Jan 26 18:18:57.171000 audit[5051]: CRED_ACQ pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:57.193884 kernel: audit: type=1101 audit(1769451537.170:795): pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:57.193930 kernel: audit: type=1103 audit(1769451537.171:796): pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:57.193961 kernel: audit: type=1006 audit(1769451537.171:797): pid=5051 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 26 18:18:57.171000 audit[5051]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8d53de10 a2=3 a3=0 items=0 ppid=1 pid=5051 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:57.210513 kernel: audit: type=1300 audit(1769451537.171:797): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8d53de10 a2=3 a3=0 items=0 ppid=1 pid=5051 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:18:57.210566 kernel: audit: type=1327 audit(1769451537.171:797): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:18:57.171000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:18:57.219398 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 26 18:18:57.223000 audit[5051]: USER_START pid=5051 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:57.239897 kernel: audit: type=1105 audit(1769451537.223:798): pid=5051 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:57.226000 audit[5055]: CRED_ACQ pid=5055 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:57.252914 kernel: audit: type=1103 audit(1769451537.226:799): pid=5055 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:57.326344 sshd[5055]: Connection closed by 10.0.0.1 port 53600 Jan 26 18:18:57.326704 sshd-session[5051]: pam_unix(sshd:session): session closed for user core Jan 26 18:18:57.327000 audit[5051]: USER_END pid=5051 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:57.332356 systemd[1]: sshd@15-10.0.0.64:22-10.0.0.1:53600.service: Deactivated successfully. Jan 26 18:18:57.335148 systemd[1]: session-17.scope: Deactivated successfully. Jan 26 18:18:57.338305 systemd-logind[1579]: Session 17 logged out. Waiting for processes to exit. Jan 26 18:18:57.339684 systemd-logind[1579]: Removed session 17. Jan 26 18:18:57.328000 audit[5051]: CRED_DISP pid=5051 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:57.355801 kernel: audit: type=1106 audit(1769451537.327:800): pid=5051 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:57.355999 kernel: audit: type=1104 audit(1769451537.328:801): pid=5051 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:18:57.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.64:22-10.0.0.1:53600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:19:00.230727 kubelet[2826]: E0126 18:19:00.230641 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:19:00.232615 kubelet[2826]: E0126 18:19:00.232554 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bf8465869-rt8sq" podUID="7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8" Jan 26 18:19:02.232132 kubelet[2826]: E0126 18:19:02.232014 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hc8sx" podUID="e2e25c6b-345b-4146-b644-2efa5b4232f3" Jan 26 18:19:02.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.64:22-10.0.0.1:53616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:02.347195 systemd[1]: Started sshd@16-10.0.0.64:22-10.0.0.1:53616.service - OpenSSH per-connection server daemon (10.0.0.1:53616). Jan 26 18:19:02.350898 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:19:02.350962 kernel: audit: type=1130 audit(1769451542.346:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.64:22-10.0.0.1:53616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:02.448000 audit[5072]: USER_ACCT pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:02.449380 sshd[5072]: Accepted publickey for core from 10.0.0.1 port 53616 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:19:02.452090 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:19:02.459322 systemd-logind[1579]: New session 18 of user core. 
Jan 26 18:19:02.449000 audit[5072]: CRED_ACQ pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:02.476409 kernel: audit: type=1101 audit(1769451542.448:804): pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:02.476549 kernel: audit: type=1103 audit(1769451542.449:805): pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:02.476588 kernel: audit: type=1006 audit(1769451542.449:806): pid=5072 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 26 18:19:02.449000 audit[5072]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff27cf1e0 a2=3 a3=0 items=0 ppid=1 pid=5072 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:02.498337 kernel: audit: type=1300 audit(1769451542.449:806): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff27cf1e0 a2=3 a3=0 items=0 ppid=1 pid=5072 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:02.498415 kernel: audit: type=1327 audit(1769451542.449:806): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:19:02.449000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:19:02.505389 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 26 18:19:02.509000 audit[5072]: USER_START pid=5072 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:02.511000 audit[5076]: CRED_ACQ pid=5076 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:02.537026 kernel: audit: type=1105 audit(1769451542.509:807): pid=5072 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:02.537176 kernel: audit: type=1103 audit(1769451542.511:808): pid=5076 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:02.620553 sshd[5076]: Connection closed by 10.0.0.1 port 53616 Jan 26 18:19:02.620968 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Jan 26 18:19:02.621000 audit[5072]: USER_END pid=5072 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:02.625103 systemd[1]: sshd@16-10.0.0.64:22-10.0.0.1:53616.service: Deactivated successfully. Jan 26 18:19:02.627911 systemd[1]: session-18.scope: Deactivated successfully. Jan 26 18:19:02.630371 systemd-logind[1579]: Session 18 logged out. Waiting for processes to exit. Jan 26 18:19:02.632335 systemd-logind[1579]: Removed session 18. Jan 26 18:19:02.621000 audit[5072]: CRED_DISP pid=5072 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:02.643661 kernel: audit: type=1106 audit(1769451542.621:809): pid=5072 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:02.643722 kernel: audit: type=1104 audit(1769451542.621:810): pid=5072 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:02.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.64:22-10.0.0.1:53616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:19:04.232230 kubelet[2826]: E0126 18:19:04.232086 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df84d7cd-llmh4" podUID="abefa45c-cb84-41f4-b47b-49099e1244be" Jan 26 18:19:07.230774 kubelet[2826]: E0126 18:19:07.230619 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:19:07.232425 kubelet[2826]: E0126 18:19:07.232366 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-68rtx" podUID="6c6db6ff-501e-4b31-91b6-e1eb34423484" Jan 26 18:19:07.636388 systemd[1]: Started sshd@17-10.0.0.64:22-10.0.0.1:32782.service - OpenSSH per-connection server daemon (10.0.0.1:32782). Jan 26 18:19:07.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.64:22-10.0.0.1:32782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:07.638506 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:19:07.638549 kernel: audit: type=1130 audit(1769451547.635:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.64:22-10.0.0.1:32782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:07.696000 audit[5092]: USER_ACCT pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.697733 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 32782 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:19:07.700678 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:19:07.696000 audit[5092]: CRED_ACQ pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.708164 systemd-logind[1579]: New session 19 of user core. 
Jan 26 18:19:07.715070 kernel: audit: type=1101 audit(1769451547.696:813): pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.715180 kernel: audit: type=1103 audit(1769451547.696:814): pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.715302 kernel: audit: type=1006 audit(1769451547.696:815): pid=5092 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 26 18:19:07.696000 audit[5092]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7b534940 a2=3 a3=0 items=0 ppid=1 pid=5092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:07.729383 kernel: audit: type=1300 audit(1769451547.696:815): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7b534940 a2=3 a3=0 items=0 ppid=1 pid=5092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:07.729436 kernel: audit: type=1327 audit(1769451547.696:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:19:07.696000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:19:07.740272 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 26 18:19:07.743000 audit[5092]: USER_START pid=5092 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.756865 kernel: audit: type=1105 audit(1769451547.743:816): pid=5092 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.756924 kernel: audit: type=1103 audit(1769451547.746:817): pid=5096 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.746000 audit[5096]: CRED_ACQ pid=5096 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.828076 sshd[5096]: Connection closed by 10.0.0.1 port 32782 Jan 26 18:19:07.828639 sshd-session[5092]: pam_unix(sshd:session): session closed for user core Jan 26 18:19:07.829000 audit[5092]: USER_END pid=5092 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.829000 audit[5092]: CRED_DISP pid=5092 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.848321 kernel: audit: type=1106 audit(1769451547.829:818): pid=5092 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.848368 kernel: audit: type=1104 audit(1769451547.829:819): pid=5092 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.853485 systemd[1]: sshd@17-10.0.0.64:22-10.0.0.1:32782.service: Deactivated successfully. Jan 26 18:19:07.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.64:22-10.0.0.1:32782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:07.855489 systemd[1]: session-19.scope: Deactivated successfully. Jan 26 18:19:07.856800 systemd-logind[1579]: Session 19 logged out. Waiting for processes to exit. 
Jan 26 18:19:07.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.64:22-10.0.0.1:32796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:07.859986 systemd[1]: Started sshd@18-10.0.0.64:22-10.0.0.1:32796.service - OpenSSH per-connection server daemon (10.0.0.1:32796). Jan 26 18:19:07.860812 systemd-logind[1579]: Removed session 19. Jan 26 18:19:07.922000 audit[5110]: USER_ACCT pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.923524 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 32796 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:19:07.923000 audit[5110]: CRED_ACQ pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.924000 audit[5110]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdcf564110 a2=3 a3=0 items=0 ppid=1 pid=5110 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:07.924000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:19:07.926175 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:19:07.932163 systemd-logind[1579]: New session 20 of user core. Jan 26 18:19:07.942023 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 26 18:19:07.943000 audit[5110]: USER_START pid=5110 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:07.945000 audit[5114]: CRED_ACQ pid=5114 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:08.162233 sshd[5114]: Connection closed by 10.0.0.1 port 32796 Jan 26 18:19:08.163308 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Jan 26 18:19:08.165000 audit[5110]: USER_END pid=5110 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:08.165000 audit[5110]: CRED_DISP pid=5110 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:08.174405 systemd[1]: sshd@18-10.0.0.64:22-10.0.0.1:32796.service: Deactivated successfully. 
Jan 26 18:19:08.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.64:22-10.0.0.1:32796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:08.176937 systemd[1]: session-20.scope: Deactivated successfully. Jan 26 18:19:08.178256 systemd-logind[1579]: Session 20 logged out. Waiting for processes to exit. Jan 26 18:19:08.181034 systemd[1]: Started sshd@19-10.0.0.64:22-10.0.0.1:32806.service - OpenSSH per-connection server daemon (10.0.0.1:32806). Jan 26 18:19:08.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.64:22-10.0.0.1:32806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:08.181708 systemd-logind[1579]: Removed session 20. Jan 26 18:19:08.231161 kubelet[2826]: E0126 18:19:08.231097 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:19:08.232044 kubelet[2826]: E0126 18:19:08.231974 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-4d52w" podUID="2acaefc0-3f5b-4b21-902d-9d452b4924d3" Jan 26 18:19:08.262000 audit[5125]: USER_ACCT pid=5125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:08.263786 sshd[5125]: Accepted publickey for core from 10.0.0.1 port 32806 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:19:08.263000 audit[5125]: CRED_ACQ pid=5125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:08.263000 audit[5125]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4c31f060 a2=3 a3=0 items=0 ppid=1 pid=5125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:08.263000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:19:08.266003 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:19:08.272547 systemd-logind[1579]: New session 21 of user core. Jan 26 18:19:08.284040 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 26 18:19:08.286000 audit[5125]: USER_START pid=5125 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:08.288000 audit[5129]: CRED_ACQ pid=5129 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:08.817000 audit[5142]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5142 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:19:08.817000 audit[5142]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd8cf0ddf0 a2=0 a3=7ffd8cf0dddc items=0 ppid=2938 pid=5142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:08.817000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:19:08.822000 audit[5142]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5142 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:19:08.822000 audit[5142]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd8cf0ddf0 a2=0 a3=0 items=0 ppid=2938 pid=5142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:08.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:19:08.830139 sshd[5129]: Connection closed by 10.0.0.1 port 32806 Jan 26 18:19:08.830437 sshd-session[5125]: pam_unix(sshd:session): session closed for user core Jan 26 18:19:08.835000 audit[5125]: USER_END pid=5125 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:08.835000 audit[5125]: CRED_DISP pid=5125 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:08.844049 systemd[1]: sshd@19-10.0.0.64:22-10.0.0.1:32806.service: Deactivated successfully. Jan 26 18:19:08.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.64:22-10.0.0.1:32806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:08.849218 systemd[1]: session-21.scope: Deactivated successfully. Jan 26 18:19:08.852135 systemd-logind[1579]: Session 21 logged out. Waiting for processes to exit. 
Jan 26 18:19:08.848000 audit[5145]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5145 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:19:08.848000 audit[5145]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe7009f570 a2=0 a3=7ffe7009f55c items=0 ppid=2938 pid=5145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:08.848000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:19:08.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.64:22-10.0.0.1:32818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:08.855025 systemd[1]: Started sshd@20-10.0.0.64:22-10.0.0.1:32818.service - OpenSSH per-connection server daemon (10.0.0.1:32818). Jan 26 18:19:08.857130 systemd-logind[1579]: Removed session 21. Jan 26 18:19:08.857000 audit[5145]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5145 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:19:08.857000 audit[5145]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe7009f570 a2=0 a3=0 items=0 ppid=2938 pid=5145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:08.857000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:19:08.916000 audit[5149]: USER_ACCT pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:08.917554 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 32818 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:19:08.918000 audit[5149]: CRED_ACQ pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:08.918000 audit[5149]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7a847270 a2=3 a3=0 items=0 ppid=1 pid=5149 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:08.918000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:19:08.920535 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:19:08.926730 systemd-logind[1579]: New session 22 of user core. Jan 26 18:19:08.940063 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 26 18:19:08.942000 audit[5149]: USER_START pid=5149 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:08.944000 audit[5153]: CRED_ACQ pid=5153 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:09.128085 sshd[5153]: Connection closed by 10.0.0.1 port 32818 Jan 26 18:19:09.128749 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Jan 26 18:19:09.131000 audit[5149]: USER_END pid=5149 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:09.131000 audit[5149]: CRED_DISP pid=5149 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:09.141191 systemd[1]: sshd@20-10.0.0.64:22-10.0.0.1:32818.service: Deactivated successfully. Jan 26 18:19:09.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.64:22-10.0.0.1:32818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:09.143497 systemd[1]: session-22.scope: Deactivated successfully. Jan 26 18:19:09.144996 systemd-logind[1579]: Session 22 logged out. Waiting for processes to exit. Jan 26 18:19:09.149146 systemd[1]: Started sshd@21-10.0.0.64:22-10.0.0.1:32826.service - OpenSSH per-connection server daemon (10.0.0.1:32826). Jan 26 18:19:09.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.64:22-10.0.0.1:32826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:09.150294 systemd-logind[1579]: Removed session 22. 
Jan 26 18:19:09.215000 audit[5164]: USER_ACCT pid=5164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:09.217948 sshd[5164]: Accepted publickey for core from 10.0.0.1 port 32826 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:19:09.217000 audit[5164]: CRED_ACQ pid=5164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:09.218000 audit[5164]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3cef1bc0 a2=3 a3=0 items=0 ppid=1 pid=5164 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:09.218000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:19:09.220595 sshd-session[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:19:09.231302 systemd-logind[1579]: New session 23 of user core. Jan 26 18:19:09.235159 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 26 18:19:09.237009 kubelet[2826]: E0126 18:19:09.236901 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x4jch" podUID="6589570b-6489-4043-b23c-e5a49733eb4e" Jan 26 18:19:09.242000 audit[5164]: USER_START pid=5164 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:09.245000 audit[5168]: CRED_ACQ pid=5168 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:09.358053 sshd[5168]: Connection closed by 10.0.0.1 port 32826 Jan 26 18:19:09.358464 sshd-session[5164]: pam_unix(sshd:session): session closed for user core Jan 26 18:19:09.359000 audit[5164]: USER_END pid=5164 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 26 18:19:09.359000 audit[5164]: CRED_DISP pid=5164 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:09.364095 systemd[1]: sshd@21-10.0.0.64:22-10.0.0.1:32826.service: Deactivated successfully. Jan 26 18:19:09.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.64:22-10.0.0.1:32826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:09.366241 systemd[1]: session-23.scope: Deactivated successfully. Jan 26 18:19:09.367265 systemd-logind[1579]: Session 23 logged out. Waiting for processes to exit. Jan 26 18:19:09.368516 systemd-logind[1579]: Removed session 23. Jan 26 18:19:12.232492 containerd[1597]: time="2026-01-26T18:19:12.232286813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 26 18:19:12.307116 containerd[1597]: time="2026-01-26T18:19:12.307011658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:19:12.308714 containerd[1597]: time="2026-01-26T18:19:12.308559260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 26 18:19:12.308714 containerd[1597]: time="2026-01-26T18:19:12.308690997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 26 18:19:12.309036 kubelet[2826]: E0126 18:19:12.308938 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:19:12.309036 kubelet[2826]: E0126 18:19:12.309023 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 26 18:19:12.309389 kubelet[2826]: E0126 18:19:12.309145 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:84b790fb7a3241c08a1d443333567276,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgncg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bf8465869-rt8sq_calico-system(7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 26 18:19:12.311482 containerd[1597]: time="2026-01-26T18:19:12.311428053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 26 18:19:12.378309 containerd[1597]: time="2026-01-26T18:19:12.378129843Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:19:12.379509 containerd[1597]: time="2026-01-26T18:19:12.379455131Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 26 18:19:12.379561 containerd[1597]: time="2026-01-26T18:19:12.379539698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 26 18:19:12.379986 kubelet[2826]: E0126 18:19:12.379932 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:19:12.380103 kubelet[2826]: E0126 18:19:12.380003 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 26 18:19:12.380179 kubelet[2826]: E0126 18:19:12.380123 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgncg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bf8465869-rt8sq_calico-system(7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 26 18:19:12.381340 kubelet[2826]: E0126 18:19:12.381268 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bf8465869-rt8sq" podUID="7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8" Jan 26 18:19:14.375483 systemd[1]: Started sshd@22-10.0.0.64:22-10.0.0.1:32838.service - OpenSSH per-connection server daemon (10.0.0.1:32838). Jan 26 18:19:14.388583 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 26 18:19:14.388761 kernel: audit: type=1130 audit(1769451554.375:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.64:22-10.0.0.1:32838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:19:14.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.64:22-10.0.0.1:32838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:14.459000 audit[5214]: USER_ACCT pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:14.461100 sshd[5214]: Accepted publickey for core from 10.0.0.1 port 32838 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:19:14.462400 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:19:14.460000 audit[5214]: CRED_ACQ pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:14.475322 systemd-logind[1579]: New session 24 of user core. Jan 26 18:19:14.479927 kernel: audit: type=1101 audit(1769451554.459:862): pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:14.480000 kernel: audit: type=1103 audit(1769451554.460:863): pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:14.480040 kernel: audit: type=1006 audit(1769451554.460:864): pid=5214 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 26 18:19:14.487122 kernel: audit: type=1300 audit(1769451554.460:864): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd9d6e770 a2=3 a3=0 items=0 ppid=1 pid=5214 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:14.460000 audit[5214]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd9d6e770 a2=3 a3=0 items=0 ppid=1 pid=5214 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:14.498560 kernel: audit: type=1327 audit(1769451554.460:864): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:19:14.460000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:19:14.506582 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 26 18:19:14.511000 audit[5214]: USER_START pid=5214 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:14.515000 audit[5219]: CRED_ACQ pid=5219 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:14.534660 kernel: audit: type=1105 audit(1769451554.511:865): pid=5214 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:14.534740 kernel: audit: type=1103 audit(1769451554.515:866): pid=5219 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:14.537000 audit[5223]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:19:14.555239 kernel: audit: type=1325 audit(1769451554.537:867): table=filter:148 family=2 entries=26 op=nft_register_rule pid=5223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:19:14.555296 kernel: audit: type=1300 audit(1769451554.537:867): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff4c0e5490 a2=0 a3=7fff4c0e547c items=0 ppid=2938 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:14.537000 audit[5223]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff4c0e5490 a2=0 a3=7fff4c0e547c items=0 ppid=2938 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:14.537000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:19:14.562000 audit[5223]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 26 18:19:14.562000 audit[5223]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff4c0e5490 a2=0 a3=7fff4c0e547c items=0 ppid=2938 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:14.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 26 18:19:14.611812 sshd[5219]: Connection closed by 10.0.0.1 port 32838 Jan 26 18:19:14.612214 sshd-session[5214]: pam_unix(sshd:session): session closed for user core Jan 26 18:19:14.612000 audit[5214]: USER_END pid=5214 uid=0 auid=500 ses=24 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:14.613000 audit[5214]: CRED_DISP pid=5214 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:14.618975 systemd[1]: sshd@22-10.0.0.64:22-10.0.0.1:32838.service: Deactivated successfully. Jan 26 18:19:14.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.64:22-10.0.0.1:32838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:14.621255 systemd[1]: session-24.scope: Deactivated successfully. Jan 26 18:19:14.622580 systemd-logind[1579]: Session 24 logged out. Waiting for processes to exit. Jan 26 18:19:14.624427 systemd-logind[1579]: Removed session 24. Jan 26 18:19:15.232474 containerd[1597]: time="2026-01-26T18:19:15.231949957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 26 18:19:15.303473 containerd[1597]: time="2026-01-26T18:19:15.303306873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:19:15.304795 containerd[1597]: time="2026-01-26T18:19:15.304659793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 26 18:19:15.304795 containerd[1597]: time="2026-01-26T18:19:15.304803391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 26 18:19:15.305120 kubelet[2826]: E0126 18:19:15.305052 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:19:15.305471 kubelet[2826]: E0126 18:19:15.305118 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 26 18:19:15.305471 kubelet[2826]: E0126 18:19:15.305278 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2kbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hc8sx_calico-system(e2e25c6b-345b-4146-b644-2efa5b4232f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 26 18:19:15.306636 kubelet[2826]: E0126 18:19:15.306576 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hc8sx" podUID="e2e25c6b-345b-4146-b644-2efa5b4232f3" Jan 26 18:19:18.232051 containerd[1597]: time="2026-01-26T18:19:18.231910233Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 26 18:19:18.321129 containerd[1597]: time="2026-01-26T18:19:18.321037880Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:19:18.322703 containerd[1597]: time="2026-01-26T18:19:18.322567390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 26 18:19:18.322703 containerd[1597]: time="2026-01-26T18:19:18.322607878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 26 18:19:18.323058 kubelet[2826]: E0126 18:19:18.323009 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:19:18.323391 kubelet[2826]: E0126 18:19:18.323123 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 26 18:19:18.323391 kubelet[2826]: E0126 18:19:18.323348 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5cx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6df84d7cd-llmh4_calico-system(abefa45c-cb84-41f4-b47b-49099e1244be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 26 18:19:18.324880 kubelet[2826]: E0126 18:19:18.324692 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df84d7cd-llmh4" podUID="abefa45c-cb84-41f4-b47b-49099e1244be" Jan 26 18:19:19.630084 systemd[1]: Started sshd@23-10.0.0.64:22-10.0.0.1:35786.service - OpenSSH per-connection server daemon (10.0.0.1:35786). Jan 26 18:19:19.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.64:22-10.0.0.1:35786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:19.640810 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 26 18:19:19.640939 kernel: audit: type=1130 audit(1769451559.629:872): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.64:22-10.0.0.1:35786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:19.744000 audit[5235]: USER_ACCT pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:19.748041 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:19:19.755493 sshd[5235]: Accepted publickey for core from 10.0.0.1 port 35786 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:19:19.744000 audit[5235]: CRED_ACQ pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:19.757393 systemd-logind[1579]: New session 25 of user core. 
Jan 26 18:19:19.764954 kernel: audit: type=1101 audit(1769451559.744:873): pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:19.765013 kernel: audit: type=1103 audit(1769451559.744:874): pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:19.765036 kernel: audit: type=1006 audit(1769451559.744:875): pid=5235 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 26 18:19:19.744000 audit[5235]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd44078ee0 a2=3 a3=0 items=0 ppid=1 pid=5235 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:19.781871 kernel: audit: type=1300 audit(1769451559.744:875): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd44078ee0 a2=3 a3=0 items=0 ppid=1 pid=5235 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 26 18:19:19.781927 kernel: audit: type=1327 audit(1769451559.744:875): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:19:19.744000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 26 18:19:19.788151 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 26 18:19:19.796000 audit[5235]: USER_START pid=5235 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:19.810915 kernel: audit: type=1105 audit(1769451559.796:876): pid=5235 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:19.812000 audit[5239]: CRED_ACQ pid=5239 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:19.826861 kernel: audit: type=1103 audit(1769451559.812:877): pid=5239 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:19.916195 sshd[5239]: Connection closed by 10.0.0.1 port 35786 Jan 26 18:19:19.917262 sshd-session[5235]: pam_unix(sshd:session): session closed for user core Jan 26 18:19:19.920000 audit[5235]: USER_END pid=5235 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:19.924895 systemd-logind[1579]: Session 25 logged out. Waiting for processes to exit. Jan 26 18:19:19.925210 systemd[1]: sshd@23-10.0.0.64:22-10.0.0.1:35786.service: Deactivated successfully. Jan 26 18:19:19.927585 systemd[1]: session-25.scope: Deactivated successfully. Jan 26 18:19:19.929979 systemd-logind[1579]: Removed session 25. Jan 26 18:19:19.920000 audit[5235]: CRED_DISP pid=5235 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:19.941519 kernel: audit: type=1106 audit(1769451559.920:878): pid=5235 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:19.941571 kernel: audit: type=1104 audit(1769451559.920:879): pid=5235 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:19.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.64:22-10.0.0.1:35786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 26 18:19:20.233028 containerd[1597]: time="2026-01-26T18:19:20.232415392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:19:20.352397 containerd[1597]: time="2026-01-26T18:19:20.352349846Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:19:20.353989 containerd[1597]: time="2026-01-26T18:19:20.353927216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:19:20.354078 containerd[1597]: time="2026-01-26T18:19:20.354037087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:19:20.354366 kubelet[2826]: E0126 18:19:20.354220 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:19:20.354366 kubelet[2826]: E0126 18:19:20.354348 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:19:20.355038 kubelet[2826]: E0126 18:19:20.354468 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv6jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dc747cc9d-4d52w_calico-apiserver(2acaefc0-3f5b-4b21-902d-9d452b4924d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 26 18:19:20.355895 kubelet[2826]: E0126 18:19:20.355754 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-4d52w" podUID="2acaefc0-3f5b-4b21-902d-9d452b4924d3" Jan 26 18:19:21.232411 containerd[1597]: time="2026-01-26T18:19:21.232352184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 26 18:19:21.301901 containerd[1597]: time="2026-01-26T18:19:21.301710267Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:19:21.303133 containerd[1597]: time="2026-01-26T18:19:21.303070448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 26 18:19:21.303341 containerd[1597]: time="2026-01-26T18:19:21.303140708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 26 18:19:21.303372 kubelet[2826]: E0126 18:19:21.303330 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:19:21.303413 kubelet[2826]: E0126 18:19:21.303380 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 26 18:19:21.303615 kubelet[2826]: E0126 18:19:21.303494 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-df5rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dc747cc9d-68rtx_calico-apiserver(6c6db6ff-501e-4b31-91b6-e1eb34423484): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 26 18:19:21.305429 kubelet[2826]: E0126 18:19:21.305135 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-68rtx" podUID="6c6db6ff-501e-4b31-91b6-e1eb34423484" Jan 26 18:19:23.232293 containerd[1597]: time="2026-01-26T18:19:23.231987368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 26 18:19:23.316902 containerd[1597]: time="2026-01-26T18:19:23.316717417Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:19:23.318371 containerd[1597]: time="2026-01-26T18:19:23.318323190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 26 18:19:23.318371 containerd[1597]: time="2026-01-26T18:19:23.318360831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 26 18:19:23.319200 kubelet[2826]: E0126 18:19:23.319088 2826 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 26 18:19:23.320038 kubelet[2826]: E0126 18:19:23.319180 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 26 18:19:23.320247 kubelet[2826]: E0126 18:19:23.320125 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9st6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x4jch_calico-system(6589570b-6489-4043-b23c-e5a49733eb4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 26 18:19:23.325642 containerd[1597]: time="2026-01-26T18:19:23.325453355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 26 18:19:23.425875 containerd[1597]: time="2026-01-26T18:19:23.425714721Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 26 18:19:23.427745 containerd[1597]: time="2026-01-26T18:19:23.427672741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 26 18:19:23.427953 containerd[1597]: time="2026-01-26T18:19:23.427740518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 26 18:19:23.428202 kubelet[2826]: E0126 18:19:23.428132 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 26 18:19:23.428272 kubelet[2826]: E0126 18:19:23.428202 2826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 26 18:19:23.428502 kubelet[2826]: E0126 18:19:23.428347 2826 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9st6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x4jch_calico-system(6589570b-6489-4043-b23c-e5a49733eb4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 26 18:19:23.430074 kubelet[2826]: E0126 18:19:23.430001 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x4jch" podUID="6589570b-6489-4043-b23c-e5a49733eb4e" Jan 26 18:19:24.230786 kubelet[2826]: E0126 18:19:24.230745 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 26 18:19:24.933135 systemd[1]: Started sshd@24-10.0.0.64:22-10.0.0.1:54962.service - OpenSSH per-connection server daemon (10.0.0.1:54962). Jan 26 18:19:24.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.64:22-10.0.0.1:54962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:24.935486 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 26 18:19:24.935551 kernel: audit: type=1130 audit(1769451564.932:881): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.64:22-10.0.0.1:54962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 26 18:19:25.022000 audit[5253]: USER_ACCT pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:25.023635 sshd[5253]: Accepted publickey for core from 10.0.0.1 port 54962 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE Jan 26 18:19:25.026787 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 26 18:19:25.024000 audit[5253]: CRED_ACQ pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 26 18:19:25.035922 systemd-logind[1579]: New session 26 of user core. 
Jan 26 18:19:25.043772 kernel: audit: type=1101 audit(1769451565.022:882): pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:25.043894 kernel: audit: type=1103 audit(1769451565.024:883): pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:25.043934 kernel: audit: type=1006 audit(1769451565.024:884): pid=5253 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Jan 26 18:19:25.049296 kernel: audit: type=1300 audit(1769451565.024:884): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff92f74ae0 a2=3 a3=0 items=0 ppid=1 pid=5253 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 26 18:19:25.024000 audit[5253]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff92f74ae0 a2=3 a3=0 items=0 ppid=1 pid=5253 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 26 18:19:25.024000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 26 18:19:25.063112 kernel: audit: type=1327 audit(1769451565.024:884): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 26 18:19:25.068255 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 26 18:19:25.071000 audit[5253]: USER_START pid=5253 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:25.074000 audit[5257]: CRED_ACQ pid=5257 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:25.097748 kernel: audit: type=1105 audit(1769451565.071:885): pid=5253 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:25.097922 kernel: audit: type=1103 audit(1769451565.074:886): pid=5257 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:25.191033 sshd[5257]: Connection closed by 10.0.0.1 port 54962
Jan 26 18:19:25.192150 sshd-session[5253]: pam_unix(sshd:session): session closed for user core
Jan 26 18:19:25.193000 audit[5253]: USER_END pid=5253 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:25.204551 systemd[1]: sshd@24-10.0.0.64:22-10.0.0.1:54962.service: Deactivated successfully.
Jan 26 18:19:25.206914 kernel: audit: type=1106 audit(1769451565.193:887): pid=5253 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:25.207432 kernel: audit: type=1104 audit(1769451565.193:888): pid=5253 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:25.193000 audit[5253]: CRED_DISP pid=5253 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:25.207745 systemd[1]: session-26.scope: Deactivated successfully.
Jan 26 18:19:25.210398 systemd-logind[1579]: Session 26 logged out. Waiting for processes to exit.
Jan 26 18:19:25.212637 systemd-logind[1579]: Removed session 26.
Jan 26 18:19:25.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.64:22-10.0.0.1:54962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:19:26.232761 kubelet[2826]: E0126 18:19:26.232674 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bf8465869-rt8sq" podUID="7dfdaf1a-1fc3-4ffb-9232-005b9874f7f8"
Jan 26 18:19:28.231363 kubelet[2826]: E0126 18:19:28.231265 2826 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 26 18:19:30.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.64:22-10.0.0.1:54974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:19:30.206687 systemd[1]: Started sshd@25-10.0.0.64:22-10.0.0.1:54974.service - OpenSSH per-connection server daemon (10.0.0.1:54974).
Jan 26 18:19:30.209014 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 26 18:19:30.209122 kernel: audit: type=1130 audit(1769451570.205:890): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.64:22-10.0.0.1:54974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:19:30.231261 kubelet[2826]: E0126 18:19:30.231222 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hc8sx" podUID="e2e25c6b-345b-4146-b644-2efa5b4232f3"
Jan 26 18:19:30.296000 audit[5272]: USER_ACCT pid=5272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:30.298078 sshd[5272]: Accepted publickey for core from 10.0.0.1 port 54974 ssh2: RSA SHA256:FBsBDqAn2CSSEioDgYmIdR2sMG3PuCcnBTUzWqQs7BE
Jan 26 18:19:30.300224 sshd-session[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 26 18:19:30.298000 audit[5272]: CRED_ACQ pid=5272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:30.309125 systemd-logind[1579]: New session 27 of user core.
Jan 26 18:19:30.317281 kernel: audit: type=1101 audit(1769451570.296:891): pid=5272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:30.317376 kernel: audit: type=1103 audit(1769451570.298:892): pid=5272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:30.317400 kernel: audit: type=1006 audit(1769451570.298:893): pid=5272 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Jan 26 18:19:30.322523 kernel: audit: type=1300 audit(1769451570.298:893): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6ecdb500 a2=3 a3=0 items=0 ppid=1 pid=5272 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 26 18:19:30.298000 audit[5272]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6ecdb500 a2=3 a3=0 items=0 ppid=1 pid=5272 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 26 18:19:30.298000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 26 18:19:30.336220 kernel: audit: type=1327 audit(1769451570.298:893): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 26 18:19:30.342197 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 26 18:19:30.344000 audit[5272]: USER_START pid=5272 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:30.346000 audit[5276]: CRED_ACQ pid=5276 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:30.365387 kernel: audit: type=1105 audit(1769451570.344:894): pid=5272 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:30.365456 kernel: audit: type=1103 audit(1769451570.346:895): pid=5276 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:30.436774 sshd[5276]: Connection closed by 10.0.0.1 port 54974
Jan 26 18:19:30.437404 sshd-session[5272]: pam_unix(sshd:session): session closed for user core
Jan 26 18:19:30.438000 audit[5272]: USER_END pid=5272 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:30.442167 systemd[1]: sshd@25-10.0.0.64:22-10.0.0.1:54974.service: Deactivated successfully.
Jan 26 18:19:30.444414 systemd[1]: session-27.scope: Deactivated successfully.
Jan 26 18:19:30.445751 systemd-logind[1579]: Session 27 logged out. Waiting for processes to exit.
Jan 26 18:19:30.447578 systemd-logind[1579]: Removed session 27.
Jan 26 18:19:30.438000 audit[5272]: CRED_DISP pid=5272 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:30.465868 kernel: audit: type=1106 audit(1769451570.438:896): pid=5272 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:30.465934 kernel: audit: type=1104 audit(1769451570.438:897): pid=5272 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 26 18:19:30.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.64:22-10.0.0.1:54974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 26 18:19:31.232083 kubelet[2826]: E0126 18:19:31.231939 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df84d7cd-llmh4" podUID="abefa45c-cb84-41f4-b47b-49099e1244be"
Jan 26 18:19:32.232246 kubelet[2826]: E0126 18:19:32.232119 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dc747cc9d-4d52w" podUID="2acaefc0-3f5b-4b21-902d-9d452b4924d3"