Jan 24 11:57:45.806361 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Sat Jan 24 09:07:34 -00 2026 Jan 24 11:57:45.806400 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7953d3c7acaad6ee79638a10c67ea9f0b3a8597919989b6fbf2f9a1742d4ba63 Jan 24 11:57:45.806418 kernel: BIOS-provided physical RAM map: Jan 24 11:57:45.806427 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 24 11:57:45.806436 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 24 11:57:45.806447 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 24 11:57:45.806460 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 24 11:57:45.806469 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 24 11:57:45.806513 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 24 11:57:45.806525 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 24 11:57:45.806658 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 24 11:57:45.806673 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 24 11:57:45.806683 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 24 11:57:45.806692 kernel: NX (Execute Disable) protection: active Jan 24 11:57:45.806705 kernel: APIC: Static calls initialized Jan 24 11:57:45.806723 kernel: SMBIOS 2.8 present. 
Jan 24 11:57:45.806767 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 24 11:57:45.806778 kernel: DMI: Memory slots populated: 1/1 Jan 24 11:57:45.806788 kernel: Hypervisor detected: KVM Jan 24 11:57:45.806800 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 24 11:57:45.806811 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 11:57:45.806822 kernel: kvm-clock: using sched offset of 15142083056 cycles Jan 24 11:57:45.806836 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 11:57:45.806848 kernel: tsc: Detected 2445.426 MHz processor Jan 24 11:57:45.806867 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 11:57:45.806880 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 11:57:45.806890 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 24 11:57:45.806900 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 24 11:57:45.806910 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 11:57:45.806920 kernel: Using GB pages for direct mapping Jan 24 11:57:45.806930 kernel: ACPI: Early table checksum verification disabled Jan 24 11:57:45.806946 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 24 11:57:45.806959 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 11:57:45.806970 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 11:57:45.806980 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 11:57:45.806990 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 24 11:57:45.807000 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 11:57:45.807010 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 11:57:45.807025 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 11:57:45.807038 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 11:57:45.807056 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 24 11:57:45.807070 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 24 11:57:45.807082 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 24 11:57:45.807096 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 24 11:57:45.807114 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 24 11:57:45.807124 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 24 11:57:45.807135 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 24 11:57:45.807145 kernel: No NUMA configuration found Jan 24 11:57:45.807155 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 24 11:57:45.807166 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 24 11:57:45.807184 kernel: Zone ranges: Jan 24 11:57:45.807196 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 11:57:45.807206 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 24 11:57:45.807216 kernel: Normal empty Jan 24 11:57:45.807226 kernel: Device empty Jan 24 11:57:45.807237 kernel: Movable zone start for each node Jan 24 11:57:45.807248 kernel: Early memory node ranges Jan 24 11:57:45.807266 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Jan 24 11:57:45.807279 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 24 11:57:45.807292 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 24 11:57:45.807304 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 11:57:45.807317 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 24 11:57:45.807382 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 24 11:57:45.807395 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 11:57:45.807413 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 11:57:45.807426 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 11:57:45.807437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 11:57:45.807479 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 11:57:45.807493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 11:57:45.807506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 11:57:45.807518 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 11:57:45.807531 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 11:57:45.807648 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 11:57:45.807663 kernel: TSC deadline timer available Jan 24 11:57:45.807675 kernel: CPU topo: Max. logical packages: 1 Jan 24 11:57:45.807685 kernel: CPU topo: Max. logical dies: 1 Jan 24 11:57:45.807695 kernel: CPU topo: Max. dies per package: 1 Jan 24 11:57:45.807705 kernel: CPU topo: Max. threads per core: 1 Jan 24 11:57:45.807716 kernel: CPU topo: Num. cores per package: 4 Jan 24 11:57:45.807734 kernel: CPU topo: Num. threads per package: 4 Jan 24 11:57:45.807747 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 24 11:57:45.807758 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 11:57:45.807768 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 24 11:57:45.807779 kernel: kvm-guest: setup PV sched yield Jan 24 11:57:45.807789 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 24 11:57:45.807799 kernel: Booting paravirtualized kernel on KVM Jan 24 11:57:45.807818 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 11:57:45.807831 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 24 11:57:45.807844 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 24 11:57:45.807857 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 24 11:57:45.807869 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 24 11:57:45.807882 kernel: kvm-guest: PV spinlocks enabled Jan 24 11:57:45.807896 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 11:57:45.807914 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7953d3c7acaad6ee79638a10c67ea9f0b3a8597919989b6fbf2f9a1742d4ba63 Jan 24 11:57:45.807925 kernel: random: crng init done Jan 24 11:57:45.807935 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 11:57:45.807945 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 
11:57:45.807956 kernel: Fallback order for Node 0: 0
Jan 24 11:57:45.807968 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 24 11:57:45.807982 kernel: Policy zone: DMA32
Jan 24 11:57:45.807998 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 11:57:45.808008 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 24 11:57:45.808018 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 24 11:57:45.808029 kernel: ftrace: allocated 157 pages with 5 groups
Jan 24 11:57:45.808039 kernel: Dynamic Preempt: voluntary
Jan 24 11:57:45.808052 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 11:57:45.808072 kernel: rcu: RCU event tracing is enabled.
Jan 24 11:57:45.808091 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 24 11:57:45.808105 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 11:57:45.808154 kernel: Rude variant of Tasks RCU enabled.
Jan 24 11:57:45.808167 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 11:57:45.808177 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 11:57:45.808188 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 24 11:57:45.808200 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 11:57:45.808219 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 11:57:45.808230 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 11:57:45.808241 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 24 11:57:45.808252 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 11:57:45.808274 kernel: Console: colour VGA+ 80x25
Jan 24 11:57:45.808291 kernel: printk: legacy console [ttyS0] enabled
Jan 24 11:57:45.808304 kernel: ACPI: Core revision 20240827
Jan 24 11:57:45.808317 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 24 11:57:45.808331 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 11:57:45.808349 kernel: x2apic enabled
Jan 24 11:57:45.808364 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 11:57:45.808412 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 24 11:57:45.808425 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 24 11:57:45.808442 kernel: kvm-guest: setup PV IPIs
Jan 24 11:57:45.808456 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 24 11:57:45.808467 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 24 11:57:45.808478 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 24 11:57:45.808489 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 11:57:45.808500 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 24 11:57:45.808511 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 24 11:57:45.808529 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 11:57:45.808633 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 11:57:45.808649 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 11:57:45.808660 kernel: Speculative Store Bypass: Vulnerable
Jan 24 11:57:45.808671 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 24 11:57:45.808683 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 24 11:57:45.808696 kernel: active return thunk: srso_alias_return_thunk
Jan 24 11:57:45.808717 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 24 11:57:45.808728 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 24 11:57:45.808739 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 11:57:45.808750 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 11:57:45.808761 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 11:57:45.808772 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 11:57:45.808785 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 11:57:45.808804 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 24 11:57:45.808818 kernel: Freeing SMP alternatives memory: 32K
Jan 24 11:57:45.808831 kernel: pid_max: default: 32768 minimum: 301
Jan 24 11:57:45.808845 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 24 11:57:45.808860 kernel: landlock: Up and running.
Jan 24 11:57:45.808872 kernel: SELinux: Initializing.
Jan 24 11:57:45.808883 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 11:57:45.808899 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 11:57:45.808952 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 24 11:57:45.808965 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 24 11:57:45.808976 kernel: signal: max sigframe size: 1776
Jan 24 11:57:45.808986 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 11:57:45.808998 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 11:57:45.809009 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 24 11:57:45.809028 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 11:57:45.809041 kernel: smp: Bringing up secondary CPUs ...
Jan 24 11:57:45.809054 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 11:57:45.809068 kernel: .... node #0, CPUs: #1 #2 #3
Jan 24 11:57:45.809081 kernel: smp: Brought up 1 node, 4 CPUs
Jan 24 11:57:45.809094 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 24 11:57:45.809108 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120520K reserved, 0K cma-reserved)
Jan 24 11:57:45.810775 kernel: devtmpfs: initialized
Jan 24 11:57:45.810786 kernel: x86/mm: Memory block size: 128MB
Jan 24 11:57:45.810795 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 11:57:45.810804 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 24 11:57:45.810818 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 11:57:45.810833 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 11:57:45.810847 kernel: audit: initializing netlink subsys (disabled)
Jan 24 11:57:45.810868 kernel: audit: type=2000 audit(1769255854.284:1): state=initialized audit_enabled=0 res=1
Jan 24 11:57:45.810912 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 11:57:45.810924 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 11:57:45.810936 kernel: cpuidle: using governor menu
Jan 24 11:57:45.810951 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 11:57:45.810966 kernel: dca service started, version 1.12.1
Jan 24 11:57:45.810979 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 24 11:57:45.810996 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 24 11:57:45.811007 kernel: PCI: Using configuration type 1 for base access
Jan 24 11:57:45.811019 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 11:57:45.811031 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 11:57:45.811043 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 11:57:45.811057 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 11:57:45.811071 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 11:57:45.811089 kernel: ACPI: Added _OSI(Module Device) Jan 24 11:57:45.811102 kernel: ACPI: Added _OSI(Processor Device) Jan 24 11:57:45.811110 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 11:57:45.811118 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 11:57:45.811126 kernel: ACPI: Interpreter enabled Jan 24 11:57:45.811134 kernel: ACPI: PM: (supports S0 S3 S5) Jan 24 11:57:45.811142 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 11:57:45.811150 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 11:57:45.811162 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 11:57:45.811274 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 11:57:45.811290 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 11:57:45.811914 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 11:57:45.812203 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 24 11:57:45.812431 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 24 11:57:45.812442 kernel: PCI host bridge to bus 0000:00 Jan 24 11:57:45.812756 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 11:57:45.812961 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 11:57:45.813157 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 11:57:45.813353 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 24 11:57:45.814172 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 24 11:57:45.814378 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 24 11:57:45.814671 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 11:57:45.816701 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 24 11:57:45.819936 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 24 11:57:45.821315 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 24 11:57:45.822244 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 24 11:57:45.823103 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 24 11:57:45.823706 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 11:57:45.824157 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 10742 usecs Jan 24 11:57:45.824727 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 24 11:57:45.825358 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 24 11:57:45.825747 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 24 11:57:45.829028 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 24 11:57:45.829530 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 24 11:57:45.829921 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 24 11:57:45.830225 kernel: pci 0000:00:03.0: BAR 1 [mem 
0xfebd2000-0xfebd2fff]
Jan 24 11:57:45.830627 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 24 11:57:45.830925 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 24 11:57:45.831245 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 24 11:57:45.831659 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 24 11:57:45.831982 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 24 11:57:45.832297 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 24 11:57:45.833683 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 24 11:57:45.834233 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 11:57:45.834634 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 24 11:57:45.834927 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 24 11:57:45.835194 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 24 11:57:45.835490 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 24 11:57:45.835871 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 24 11:57:45.835893 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 11:57:45.835907 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 11:57:45.835918 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 11:57:45.835930 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 11:57:45.835947 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 11:57:45.835958 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 11:57:45.835970 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 11:57:45.835982 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 11:57:45.835993 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 11:57:45.836006 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 11:57:45.836019 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 11:57:45.836031 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 11:57:45.836048 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 11:57:45.836061 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 11:57:45.836073 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 11:57:45.836086 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 11:57:45.836098 kernel: iommu: Default domain type: Translated
Jan 24 11:57:45.836110 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 11:57:45.836122 kernel: PCI: Using ACPI for IRQ routing
Jan 24 11:57:45.836137 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 11:57:45.836149 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 24 11:57:45.836161 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 24 11:57:45.836430 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 11:57:45.836806 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 11:57:45.837081 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 11:57:45.837099 kernel: vgaarb: loaded
Jan 24 11:57:45.837118 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 24 11:57:45.837131 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 24 11:57:45.837143 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 11:57:45.837155 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 11:57:45.837167 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 11:57:45.837180 kernel: pnp: PnP ACPI init
Jan 24 11:57:45.837485 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 24 11:57:45.837511 kernel: pnp: PnP ACPI: found 6 devices
Jan 24 11:57:45.837524 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 11:57:45.837537 kernel: NET: Registered PF_INET protocol family
Jan 24 11:57:45.837640 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 11:57:45.837653 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 24 11:57:45.837666 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 11:57:45.837684 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 11:57:45.837697 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 24 11:57:45.837710 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 24 11:57:45.837722 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 11:57:45.837734 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 11:57:45.837746 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 11:57:45.837757 kernel: NET: Registered PF_XDP protocol family
Jan 24 11:57:45.838027 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 11:57:45.838283 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 11:57:45.838672 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 11:57:45.838981 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 24 11:57:45.839239 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 24 11:57:45.839500 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 24 11:57:45.839521 kernel: PCI: CLS 0 bytes, default 64
Jan 24 11:57:45.839669 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 24 11:57:45.839686 kernel: Initialise system trusted keyrings
Jan 24 11:57:45.839699 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 24 11:57:45.839712 kernel: Key type asymmetric registered
Jan 24 11:57:45.839725 kernel: Asymmetric key parser 'x509' registered
Jan 24 11:57:45.839737 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 24 11:57:45.839748 kernel: io scheduler mq-deadline registered
Jan 24 11:57:45.839765 kernel: io scheduler kyber registered
Jan 24 11:57:45.839778 kernel: io scheduler bfq registered
Jan 24 11:57:45.839791 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 11:57:45.839804 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 11:57:45.839817 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 24 11:57:45.839829 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 24 11:57:45.839841 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 11:57:45.839859 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 11:57:45.839871 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 11:57:45.839885 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 11:57:45.839897 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 11:57:45.840255 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 24 11:57:45.840811 kernel: rtc_cmos 00:04: registered as rtc0
Jan 24 11:57:45.840833 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 11:57:45.841122 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T11:57:41 UTC (1769255861)
Jan 24 11:57:45.841411 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 24 11:57:45.841432 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 24 11:57:45.841446 kernel: NET: Registered PF_INET6 protocol family
Jan 24 11:57:45.841460 kernel: Segment Routing with IPv6
Jan 24 11:57:45.841474 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 11:57:45.841492 kernel: NET: Registered PF_PACKET protocol family
Jan 24 11:57:45.841507 kernel: Key type dns_resolver registered
Jan 24 11:57:45.841520 kernel: IPI shorthand broadcast: enabled
Jan 24 11:57:45.841533 kernel: sched_clock: Marking stable (5919071122, 1486508961)->(8240439108, -834859025)
Jan 24 11:57:45.843981 kernel: registered taskstats version 1
Jan 24 11:57:45.844003 kernel: Loading compiled-in X.509 certificates
Jan 24 11:57:45.844018 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: a97c6138cc1b5c46f82656a7e055bcfc44b38b5c'
Jan 24 11:57:45.844031 kernel: Demotion targets for Node 0: null
Jan 24 11:57:45.844051 kernel: Key type .fscrypt registered
Jan 24 11:57:45.844064 kernel: Key type fscrypt-provisioning registered
Jan 24 11:57:45.844076 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 11:57:45.844089 kernel: ima: Allocated hash algorithm: sha1
Jan 24 11:57:45.844101 kernel: ima: No architecture policies found
Jan 24 11:57:45.844153 kernel: clk: Disabling unused clocks
Jan 24 11:57:45.844171 kernel: Freeing unused kernel image (initmem) memory: 15536K
Jan 24 11:57:45.844183 kernel: Write protecting the kernel read-only data: 47104k
Jan 24 11:57:45.844195 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K
Jan 24 11:57:45.844208 kernel: Run /init as init process
Jan 24 11:57:45.844221 kernel: with arguments:
Jan 24 11:57:45.844233 kernel: /init
Jan 24 11:57:45.844245 kernel: with environment:
Jan 24 11:57:45.844257 kernel: HOME=/
Jan 24 11:57:45.844273 kernel: TERM=linux
Jan 24 11:57:45.844286 kernel: SCSI subsystem initialized
Jan 24 11:57:45.844299 kernel: libata version 3.00 loaded.
Jan 24 11:57:45.844776 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 11:57:45.844798 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 11:57:45.845094 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 24 11:57:45.845403 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 24 11:57:45.846126 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 11:57:45.848170 kernel: scsi host0: ahci Jan 24 11:57:45.848533 kernel: scsi host1: ahci Jan 24 11:57:45.849062 kernel: scsi host2: ahci Jan 24 11:57:45.849425 kernel: scsi host3: ahci Jan 24 11:57:45.849891 kernel: scsi host4: ahci Jan 24 11:57:45.850203 kernel: scsi host5: ahci Jan 24 11:57:45.850219 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Jan 24 11:57:45.850229 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Jan 24 11:57:45.850238 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Jan 24 11:57:45.850246 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Jan 24 11:57:45.850260 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Jan 24 11:57:45.850268 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Jan 24 11:57:45.850276 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 11:57:45.850284 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 11:57:45.850292 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 11:57:45.850301 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 24 11:57:45.850309 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 11:57:45.850320 kernel: ata3.00: LPM support broken, forcing max_power Jan 24 11:57:45.850328 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 24 11:57:45.850336 kernel: ata3.00: applying bridge limits Jan 24 11:57:45.850344 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 11:57:45.850353 kernel: ata3.00: LPM support broken, forcing max_power Jan 24 11:57:45.850360 kernel: ata3.00: configured for UDMA/100 Jan 24 11:57:45.850850 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 24 11:57:45.851165 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 24 11:57:45.851387 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 24 11:57:45.851399 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 11:57:45.851408 kernel: GPT:16515071 != 27000831 Jan 24 11:57:45.851417 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 11:57:45.851425 kernel: GPT:16515071 != 27000831 Jan 24 11:57:45.851446 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 11:57:45.851462 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 11:57:45.851870 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 24 11:57:45.851889 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 11:57:45.852171 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 24 11:57:45.852188 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 24 11:57:45.852206 kernel: device-mapper: uevent: version 1.0.3 Jan 24 11:57:45.852219 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 24 11:57:45.852231 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 24 11:57:45.852244 kernel: raid6: avx2x4 gen() 20342 MB/s Jan 24 11:57:45.852256 kernel: raid6: avx2x2 gen() 20809 MB/s Jan 24 11:57:45.852268 kernel: raid6: avx2x1 gen() 15067 MB/s Jan 24 11:57:45.852280 kernel: raid6: using algorithm avx2x2 gen() 20809 MB/s Jan 24 11:57:45.852295 kernel: raid6: .... xor() 18489 MB/s, rmw enabled Jan 24 11:57:45.852313 kernel: raid6: using avx2x2 recovery algorithm Jan 24 11:57:45.852327 kernel: xor: automatically using best checksumming function avx Jan 24 11:57:45.852336 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 11:57:45.852347 kernel: BTRFS: device fsid d3bd77fc-0f38-45e2-bb37-1f1b4d0917b8 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (181) Jan 24 11:57:45.852358 kernel: BTRFS info (device dm-0): first mount of filesystem d3bd77fc-0f38-45e2-bb37-1f1b4d0917b8 Jan 24 11:57:45.852367 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 11:57:45.852375 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 11:57:45.852383 kernel: BTRFS info (device dm-0): enabling free space tree Jan 24 11:57:45.852391 kernel: loop: module loaded Jan 24 11:57:45.852399 kernel: loop0: detected capacity change from 0 to 100552 Jan 24 11:57:45.852407 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 11:57:45.852419 systemd[1]: Successfully made /usr/ read-only. Jan 24 11:57:45.852431 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 24 11:57:45.852440 systemd[1]: Detected virtualization kvm. Jan 24 11:57:45.852449 systemd[1]: Detected architecture x86-64. Jan 24 11:57:45.852466 systemd[1]: Running in initrd. Jan 24 11:57:45.852480 systemd[1]: No hostname configured, using default hostname. Jan 24 11:57:45.852499 systemd[1]: Hostname set to . Jan 24 11:57:45.852515 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 24 11:57:45.852530 systemd[1]: Queued start job for default target initrd.target. Jan 24 11:57:45.852622 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 24 11:57:45.852635 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 11:57:45.852644 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 11:57:45.852658 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 11:57:45.852667 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 11:57:45.852676 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 11:57:45.852685 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 11:57:45.852694 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 24 11:57:45.852702 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 11:57:45.852714 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 24 11:57:45.852722 systemd[1]: Reached target paths.target - Path Units. Jan 24 11:57:45.852731 systemd[1]: Reached target slices.target - Slice Units. Jan 24 11:57:45.852739 systemd[1]: Reached target swap.target - Swaps. Jan 24 11:57:45.852748 systemd[1]: Reached target timers.target - Timer Units. Jan 24 11:57:45.852757 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 11:57:45.852765 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 11:57:45.852777 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 24 11:57:45.852785 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 11:57:45.852794 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 24 11:57:45.852802 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 11:57:45.852810 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 11:57:45.852819 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 11:57:45.852834 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 11:57:45.852849 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 11:57:45.852863 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 11:57:45.852878 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 11:57:45.852891 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 11:57:45.852910 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 24 11:57:45.852922 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 11:57:45.852938 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 11:57:45.852951 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 11:57:45.852964 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 11:57:45.852978 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 11:57:45.852996 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 11:57:45.853009 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 11:57:45.853022 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 11:57:45.853074 systemd-journald[319]: Collecting audit messages is enabled. Jan 24 11:57:45.853105 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 11:57:45.853118 kernel: Bridge firewalling registered Jan 24 11:57:45.853131 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 11:57:45.853144 systemd-journald[319]: Journal started Jan 24 11:57:45.853179 systemd-journald[319]: Runtime Journal (/run/log/journal/27d534a7e1db44e38883bdb11c80e90e) is 6M, max 48.2M, 42.1M free. 
Jan 24 11:57:45.843818 systemd-modules-load[322]: Inserted module 'br_netfilter' Jan 24 11:57:45.870045 kernel: audit: type=1130 audit(1769255865.848:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:45.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:45.876845 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 11:57:45.899896 kernel: audit: type=1130 audit(1769255865.876:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:45.899988 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 11:57:45.900023 kernel: audit: type=1130 audit(1769255865.892:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:45.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:45.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:45.898927 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 11:57:46.125498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 11:57:46.141399 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 11:57:46.187301 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 11:57:46.225511 kernel: audit: type=1130 audit(1769255866.192:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:46.227630 kernel: audit: type=1130 audit(1769255866.225:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:46.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:46.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:46.202782 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 11:57:46.270212 kernel: audit: type=1130 audit(1769255866.241:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:57:46.270251 kernel: audit: type=1334 audit(1769255866.257:8): prog-id=6 op=LOAD Jan 24 11:57:46.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:46.257000 audit: BPF prog-id=6 op=LOAD Jan 24 11:57:46.228228 systemd-tmpfiles[338]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 24 11:57:46.240999 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 11:57:46.297272 kernel: audit: type=1130 audit(1769255866.283:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:46.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:46.256703 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 11:57:46.261081 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 11:57:46.269841 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 11:57:46.329376 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 11:57:46.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:46.343212 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 11:57:46.357494 kernel: audit: type=1130 audit(1769255866.334:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:46.399257 systemd-resolved[346]: Positive Trust Anchors: Jan 24 11:57:46.399279 systemd-resolved[346]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 11:57:46.399285 systemd-resolved[346]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 11:57:46.399331 systemd-resolved[346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 11:57:46.469508 dracut-cmdline[360]: dracut-109 Jan 24 11:57:46.469508 dracut-cmdline[360]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7953d3c7acaad6ee79638a10c67ea9f0b3a8597919989b6fbf2f9a1742d4ba63 Jan 24 11:57:46.520121 kernel: audit: type=1130 audit(1769255866.471:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:46.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:46.440858 systemd-resolved[346]: Defaulting to hostname 'linux'. Jan 24 11:57:46.443678 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 11:57:46.472055 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 11:57:46.764768 kernel: Loading iSCSI transport class v2.0-870. Jan 24 11:57:46.808721 kernel: iscsi: registered transport (tcp) Jan 24 11:57:46.926947 kernel: iscsi: registered transport (qla4xxx) Jan 24 11:57:46.927793 kernel: QLogic iSCSI HBA Driver Jan 24 11:57:46.989293 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 11:57:47.036515 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 11:57:47.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:47.047238 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 11:57:47.203311 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 11:57:47.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:47.208880 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 11:57:47.225175 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 11:57:47.296049 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 11:57:47.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:57:47.305000 audit: BPF prog-id=7 op=LOAD Jan 24 11:57:47.305000 audit: BPF prog-id=8 op=LOAD Jan 24 11:57:47.306906 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 11:57:47.438415 systemd-udevd[586]: Using default interface naming scheme 'v257'. Jan 24 11:57:47.490244 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 11:57:47.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:47.511282 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 11:57:47.605900 dracut-pre-trigger[625]: rd.md=0: removing MD RAID activation Jan 24 11:57:47.683502 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 11:57:47.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:47.691714 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 11:57:47.724899 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 11:57:47.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:47.748000 audit: BPF prog-id=9 op=LOAD Jan 24 11:57:47.751292 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 11:57:47.891231 systemd-networkd[724]: lo: Link UP Jan 24 11:57:47.891269 systemd-networkd[724]: lo: Gained carrier Jan 24 11:57:47.897033 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 11:57:47.901234 systemd[1]: Reached target network.target - Network. Jan 24 11:57:47.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:47.935793 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 11:57:47.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:47.952903 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 11:57:48.395420 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 24 11:57:48.436928 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 11:57:48.501359 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 11:57:48.573964 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 24 11:57:48.583645 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 11:57:48.674153 kernel: hrtimer: interrupt took 9115368 ns Jan 24 11:57:48.694483 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jan 24 11:57:48.718219 kernel: AES CTR mode by8 optimization enabled Jan 24 11:57:48.718850 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 11:57:48.723597 systemd-networkd[724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 11:57:48.723652 systemd-networkd[724]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 11:57:48.729260 systemd-networkd[724]: eth0: Link UP Jan 24 11:57:48.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:48.729692 systemd-networkd[724]: eth0: Gained carrier Jan 24 11:57:48.729707 systemd-networkd[724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 11:57:48.799666 disk-uuid[819]: Primary Header is updated. Jan 24 11:57:48.799666 disk-uuid[819]: Secondary Entries is updated. Jan 24 11:57:48.799666 disk-uuid[819]: Secondary Header is updated. Jan 24 11:57:48.744769 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 11:57:48.744853 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 11:57:48.764409 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 11:57:48.771230 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 11:57:48.779041 systemd-networkd[724]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 11:57:49.273055 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 11:57:49.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:49.340920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 11:57:49.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:49.349149 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 11:57:49.364316 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 11:57:49.369238 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 11:57:49.382034 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 11:57:49.438102 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 11:57:49.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:50.134265 systemd-networkd[724]: eth0: Gained IPv6LL Jan 24 11:57:50.188761 disk-uuid[825]: Warning: The kernel is still using the old partition table. Jan 24 11:57:50.188761 disk-uuid[825]: The new table will be used at the next reboot or after you Jan 24 11:57:50.188761 disk-uuid[825]: run partprobe(8) or kpartx(8) Jan 24 11:57:50.188761 disk-uuid[825]: The operation has completed successfully. Jan 24 11:57:50.259147 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 24 11:57:50.262036 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 11:57:50.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:50.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:50.289058 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 11:57:50.432323 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (858) Jan 24 11:57:50.437756 kernel: BTRFS info (device vda6): first mount of filesystem 1b92a19b-e1e6-4749-8204-553c8c72e265 Jan 24 11:57:50.437808 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 11:57:50.458013 kernel: BTRFS info (device vda6): turning on async discard Jan 24 11:57:50.458108 kernel: BTRFS info (device vda6): enabling free space tree Jan 24 11:57:50.495824 kernel: BTRFS info (device vda6): last unmount of filesystem 1b92a19b-e1e6-4749-8204-553c8c72e265 Jan 24 11:57:50.525110 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 11:57:50.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:50.557174 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 11:57:52.040348 ignition[877]: Ignition 2.24.0 Jan 24 11:57:52.040391 ignition[877]: Stage: fetch-offline Jan 24 11:57:52.040880 ignition[877]: no configs at "/usr/lib/ignition/base.d" Jan 24 11:57:52.040904 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 11:57:52.041893 ignition[877]: parsed url from cmdline: "" Jan 24 11:57:52.041900 ignition[877]: no config URL provided Jan 24 11:57:52.042617 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 11:57:52.042684 ignition[877]: no config at "/usr/lib/ignition/user.ign" Jan 24 11:57:52.043048 ignition[877]: op(1): [started] loading QEMU firmware config module Jan 24 11:57:52.043056 ignition[877]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 24 11:57:52.100954 ignition[877]: op(1): [finished] loading QEMU firmware config module Jan 24 11:57:52.470103 ignition[877]: parsing config with SHA512: aa79a610a8cc28abb611e804aaad21ef79ee8f3b45a142ba9fcf3a1117b40ffa2dbefa52b39fa28be9e18464235b931b1eab935cc45dae0c8055a41b9832596d Jan 24 11:57:52.625959 unknown[877]: fetched base config from "system" Jan 24 11:57:52.627348 unknown[877]: fetched user config from "qemu" Jan 24 11:57:52.647626 ignition[877]: fetch-offline: fetch-offline passed Jan 24 11:57:52.666137 ignition[877]: Ignition finished successfully Jan 24 11:57:52.689795 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 11:57:52.709063 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 24 11:57:52.709096 kernel: audit: type=1130 audit(1769255872.701:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:57:52.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:52.726991 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 24 11:57:52.739470 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 11:57:53.001406 ignition[887]: Ignition 2.24.0 Jan 24 11:57:53.001757 ignition[887]: Stage: kargs Jan 24 11:57:53.006536 ignition[887]: no configs at "/usr/lib/ignition/base.d" Jan 24 11:57:53.006620 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 11:57:53.020066 ignition[887]: kargs: kargs passed Jan 24 11:57:53.020164 ignition[887]: Ignition finished successfully Jan 24 11:57:53.031813 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 11:57:53.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:53.048018 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 11:57:53.090999 kernel: audit: type=1130 audit(1769255873.045:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:53.144957 ignition[895]: Ignition 2.24.0 Jan 24 11:57:53.145003 ignition[895]: Stage: disks Jan 24 11:57:53.145193 ignition[895]: no configs at "/usr/lib/ignition/base.d" Jan 24 11:57:53.145210 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 11:57:53.158288 ignition[895]: disks: disks passed Jan 24 11:57:53.171112 ignition[895]: Ignition finished successfully Jan 24 11:57:53.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:53.176890 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 11:57:53.228322 kernel: audit: type=1130 audit(1769255873.182:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:53.182738 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 11:57:53.202193 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 11:57:53.202328 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 11:57:53.206453 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 11:57:53.208122 systemd[1]: Reached target basic.target - Basic System. Jan 24 11:57:53.219149 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 11:57:53.576807 systemd-fsck[904]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 24 11:57:53.595140 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 11:57:53.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:57:53.610473 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 11:57:53.628735 kernel: audit: type=1130 audit(1769255873.608:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:54.049616 kernel: EXT4-fs (vda9): mounted filesystem 04920273-eebf-4ad5-828c-7340043c8075 r/w with ordered data mode. Quota mode: none. Jan 24 11:57:54.052021 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 11:57:54.064889 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 11:57:54.136919 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 11:57:54.179040 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 11:57:54.190280 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 11:57:54.190347 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 11:57:54.241271 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (913) Jan 24 11:57:54.190382 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 11:57:54.261885 kernel: BTRFS info (device vda6): first mount of filesystem 1b92a19b-e1e6-4749-8204-553c8c72e265 Jan 24 11:57:54.262004 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 11:57:54.231532 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 11:57:54.321443 kernel: BTRFS info (device vda6): turning on async discard Jan 24 11:57:54.322232 kernel: BTRFS info (device vda6): enabling free space tree Jan 24 11:57:54.317812 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 11:57:54.336229 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 11:57:54.906343 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 11:57:54.935355 kernel: audit: type=1130 audit(1769255874.915:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:54.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:54.918120 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 11:57:54.951041 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 11:57:55.005412 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 11:57:55.023353 kernel: BTRFS info (device vda6): last unmount of filesystem 1b92a19b-e1e6-4749-8204-553c8c72e265 Jan 24 11:57:55.088615 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 11:57:55.110983 kernel: audit: type=1130 audit(1769255875.093:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:55.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:57:55.541342 ignition[1011]: INFO : Ignition 2.24.0 Jan 24 11:57:55.541342 ignition[1011]: INFO : Stage: mount Jan 24 11:57:55.541342 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 11:57:55.541342 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 11:57:55.571060 ignition[1011]: INFO : mount: mount passed Jan 24 11:57:55.571060 ignition[1011]: INFO : Ignition finished successfully Jan 24 11:57:55.572841 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 11:57:55.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:55.587284 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 11:57:55.612120 kernel: audit: type=1130 audit(1769255875.584:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:57:55.643877 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 11:57:55.714935 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1022) Jan 24 11:57:55.725181 kernel: BTRFS info (device vda6): first mount of filesystem 1b92a19b-e1e6-4749-8204-553c8c72e265 Jan 24 11:57:55.725274 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 11:57:55.750178 kernel: BTRFS info (device vda6): turning on async discard Jan 24 11:57:55.750255 kernel: BTRFS info (device vda6): enabling free space tree Jan 24 11:57:55.762930 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 11:57:56.060935 ignition[1039]: INFO : Ignition 2.24.0 Jan 24 11:57:56.060935 ignition[1039]: INFO : Stage: files Jan 24 11:57:56.074238 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 11:57:56.074238 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 11:57:56.086729 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping Jan 24 11:57:56.101137 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 11:57:56.101137 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 11:57:56.132917 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 11:57:56.139045 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 11:57:56.149049 unknown[1039]: wrote ssh authorized keys file for user: core Jan 24 11:57:56.159127 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 11:57:56.166058 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 11:57:56.166058 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 24 11:57:56.272631 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 11:57:57.638995 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 11:57:57.638995 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[started] writing file "/sysroot/home/core/install.sh" Jan 24 11:57:57.657392 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 11:57:57.657392 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 11:57:57.657392 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 11:57:57.657392 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 11:57:57.657392 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 11:57:57.657392 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 11:57:57.657392 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 11:57:57.702069 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 11:57:57.714800 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 11:57:57.714800 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 11:57:57.714800 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 11:57:57.714800 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 11:57:57.714800 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 24 11:57:58.168511 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 11:57:59.900242 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 11:57:59.900242 ignition[1039]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 11:57:59.916285 ignition[1039]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 11:57:59.942614 ignition[1039]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 11:57:59.942614 ignition[1039]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 11:57:59.942614 ignition[1039]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 24 11:57:59.942614 ignition[1039]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 11:57:59.942614 ignition[1039]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 11:57:59.942614 ignition[1039]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 24 11:57:59.942614 ignition[1039]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 24 11:58:00.018781 ignition[1039]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 11:58:00.033175 ignition[1039]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 11:58:00.033175 ignition[1039]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 24 11:58:00.033175 ignition[1039]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 24 11:58:00.033175 ignition[1039]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 11:58:00.058741 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 11:58:00.058741 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 11:58:00.058741 ignition[1039]: INFO : files: files passed Jan 24 11:58:00.058741 ignition[1039]: INFO : Ignition finished successfully Jan 24 11:58:00.111414 kernel: audit: type=1130 audit(1769255880.073:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.051037 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 11:58:00.079856 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 11:58:00.108957 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 11:58:00.166752 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 11:58:00.167607 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 11:58:00.202087 kernel: audit: type=1130 audit(1769255880.171:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.202124 kernel: audit: type=1131 audit(1769255880.171:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:58:00.202289 initrd-setup-root-after-ignition[1070]: grep: /sysroot/oem/oem-release: No such file or directory Jan 24 11:58:00.207295 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 11:58:00.207295 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 11:58:00.223262 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 11:58:00.233593 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 11:58:00.253633 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 11:58:00.278112 kernel: audit: type=1130 audit(1769255880.251:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.272740 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 11:58:00.401904 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 11:58:00.438813 kernel: audit: type=1130 audit(1769255880.408:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.438938 kernel: audit: type=1131 audit(1769255880.410:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.402125 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 11:58:00.410612 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 11:58:00.447198 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 11:58:00.453417 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 11:58:00.455193 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 11:58:00.537134 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 11:58:00.566524 kernel: audit: type=1130 audit(1769255880.545:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:58:00.549848 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 11:58:00.600429 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 24 11:58:00.601250 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 11:58:00.611756 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 11:58:00.618130 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 11:58:00.662316 kernel: audit: type=1131 audit(1769255880.643:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.626834 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 11:58:00.627056 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 11:58:00.662805 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 11:58:00.681396 systemd[1]: Stopped target basic.target - Basic System. Jan 24 11:58:00.686182 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 11:58:00.700870 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 11:58:00.708025 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 11:58:00.720779 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 24 11:58:00.733821 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 11:58:00.747487 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 11:58:00.760279 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 11:58:00.768659 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 11:58:00.788215 systemd[1]: Stopped target swap.target - Swaps. Jan 24 11:58:00.799815 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 11:58:00.829481 kernel: audit: type=1131 audit(1769255880.814:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.800240 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 11:58:00.831163 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 11:58:00.849643 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 11:58:00.863887 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 11:58:00.872023 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 11:58:00.889826 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 11:58:00.890246 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jan 24 11:58:00.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.916638 kernel: audit: type=1131 audit(1769255880.906:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.917484 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 11:58:00.918109 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 11:58:00.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:00.941073 systemd[1]: Stopped target paths.target - Path Units. Jan 24 11:58:00.947417 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 11:58:00.951509 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 11:58:00.961977 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 11:58:00.971019 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 11:58:00.986148 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 11:58:00.986338 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 11:58:01.005153 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 11:58:01.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:01.005321 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 11:58:01.017241 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 24 11:58:01.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:01.017478 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 24 11:58:01.031717 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 11:58:01.031958 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 11:58:01.043228 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 11:58:01.043432 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 11:58:01.065746 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 11:58:01.093531 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 11:58:01.110892 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 11:58:01.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:01.119932 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 11:58:01.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:58:01.134362 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 11:58:01.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:01.134806 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 11:58:01.145499 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 11:58:01.145759 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 11:58:01.204661 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 11:58:01.381312 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 11:58:01.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:01.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:01.448271 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 11:58:01.469711 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 11:58:01.470745 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 11:58:01.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:01.825055 ignition[1096]: INFO : Ignition 2.24.0 Jan 24 11:58:01.825055 ignition[1096]: INFO : Stage: umount Jan 24 11:58:01.835040 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 11:58:01.835040 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 11:58:01.835040 ignition[1096]: INFO : umount: umount passed Jan 24 11:58:01.835040 ignition[1096]: INFO : Ignition finished successfully Jan 24 11:58:01.852143 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 11:58:01.852745 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 11:58:01.867045 systemd[1]: Stopped target network.target - Network. Jan 24 11:58:01.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:01.882793 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 11:58:01.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:01.882910 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 11:58:01.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:01.883227 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 11:58:01.883293 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jan 24 11:58:01.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:01.892515 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 11:58:01.892720 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 11:58:01.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:01.901629 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 11:58:01.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:01.902383 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 11:58:01.908716 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 11:58:01.908842 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 11:58:01.923139 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 11:58:01.926620 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 11:58:01.966046 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 11:58:01.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:01.966351 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 11:58:01.981494 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 24 11:58:01.987180 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 11:58:01.987249 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 11:58:02.015000 audit: BPF prog-id=9 op=UNLOAD Jan 24 11:58:01.992076 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 11:58:02.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.017502 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 11:58:02.038254 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 11:58:02.042824 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 11:58:02.148343 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 11:58:02.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.148531 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 11:58:02.202033 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 11:58:02.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:58:02.202341 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 11:58:02.212000 audit: BPF prog-id=6 op=UNLOAD Jan 24 11:58:02.212537 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 11:58:02.212783 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 11:58:02.221163 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 11:58:02.221232 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 11:58:02.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.236192 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 11:58:02.236293 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 11:58:02.244285 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 11:58:02.244374 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 11:58:02.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.263765 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 11:58:02.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.263883 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 11:58:02.286057 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 11:58:02.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.290090 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 24 11:58:02.290172 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 11:58:02.290378 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 11:58:02.290451 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 11:58:02.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.314914 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 11:58:02.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.315031 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 11:58:02.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.322474 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 24 11:58:02.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.322658 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 11:58:02.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.327002 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 24 11:58:02.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.327084 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 11:58:02.348065 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 11:58:02.348183 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 11:58:02.391326 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 11:58:02.391464 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 11:58:02.581190 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 11:58:02.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.581478 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 11:58:02.693411 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 11:58:02.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:02.694803 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 11:58:02.703509 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 11:58:02.712897 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 11:58:02.781667 systemd[1]: Switching root. Jan 24 11:58:02.841338 systemd-journald[319]: Journal stopped Jan 24 11:58:07.075086 systemd-journald[319]: Received SIGTERM from PID 1 (systemd). 
Jan 24 11:58:07.075285 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 11:58:07.075312 kernel: SELinux: policy capability open_perms=1 Jan 24 11:58:07.075335 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 11:58:07.075369 kernel: SELinux: policy capability always_check_network=0 Jan 24 11:58:07.075393 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 11:58:07.075457 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 11:58:07.075482 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 11:58:07.076039 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 11:58:07.076106 kernel: SELinux: policy capability userspace_initial_context=0 Jan 24 11:58:07.076132 systemd[1]: Successfully loaded SELinux policy in 124.990ms. Jan 24 11:58:07.076188 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.975ms. Jan 24 11:58:07.076211 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 24 11:58:07.076229 systemd[1]: Detected virtualization kvm. Jan 24 11:58:07.076246 systemd[1]: Detected architecture x86-64. Jan 24 11:58:07.076312 systemd[1]: Detected first boot. Jan 24 11:58:07.076331 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 24 11:58:07.076349 zram_generator::config[1140]: No configuration found. Jan 24 11:58:07.076377 kernel: Guest personality initialized and is inactive Jan 24 11:58:07.076394 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 24 11:58:07.076411 kernel: Initialized host personality Jan 24 11:58:07.076427 kernel: NET: Registered PF_VSOCK protocol family Jan 24 11:58:07.076495 systemd[1]: Populated /etc with preset unit settings. Jan 24 11:58:07.076516 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 24 11:58:07.076535 kernel: audit: type=1334 audit(1769255885.345:87): prog-id=12 op=LOAD Jan 24 11:58:07.076618 kernel: audit: type=1334 audit(1769255885.347:88): prog-id=3 op=UNLOAD Jan 24 11:58:07.076642 kernel: audit: type=1334 audit(1769255885.363:89): prog-id=13 op=LOAD Jan 24 11:58:07.076664 kernel: audit: type=1334 audit(1769255885.364:90): prog-id=14 op=LOAD Jan 24 11:58:07.076685 kernel: audit: type=1334 audit(1769255885.364:91): prog-id=4 op=UNLOAD Jan 24 11:58:07.076792 kernel: audit: type=1334 audit(1769255885.364:92): prog-id=5 op=UNLOAD Jan 24 11:58:07.076849 kernel: audit: type=1131 audit(1769255885.392:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.076874 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 11:58:07.076925 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 11:58:07.076947 kernel: audit: type=1334 audit(1769255885.424:94): prog-id=12 op=UNLOAD Jan 24 11:58:07.076968 kernel: audit: type=1130 audit(1769255885.437:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:58:07.077032 kernel: audit: type=1131 audit(1769255885.437:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.077083 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 11:58:07.077111 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 11:58:07.077133 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 11:58:07.077151 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 11:58:07.077169 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 11:58:07.077236 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 11:58:07.077256 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 11:58:07.077275 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 11:58:07.077292 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 11:58:07.077312 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 11:58:07.077333 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 11:58:07.077354 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 11:58:07.077421 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 11:58:07.078807 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 11:58:07.078845 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 11:58:07.078866 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 11:58:07.078884 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 11:58:07.078901 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 11:58:07.078954 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 11:58:07.079021 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 11:58:07.079041 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 11:58:07.079059 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 11:58:07.079076 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 11:58:07.079095 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 11:58:07.079115 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 24 11:58:07.079132 systemd[1]: Reached target slices.target - Slice Units. Jan 24 11:58:07.079199 systemd[1]: Reached target swap.target - Swaps. Jan 24 11:58:07.079224 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 11:58:07.079244 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 11:58:07.079265 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 24 11:58:07.079282 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. 
Jan 24 11:58:07.079300 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 24 11:58:07.079318 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 11:58:07.079385 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 24 11:58:07.079406 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 24 11:58:07.079427 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 11:58:07.079448 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 11:58:07.079466 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 11:58:07.079478 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 11:58:07.079491 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 11:58:07.079583 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 11:58:07.079604 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 11:58:07.079626 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 11:58:07.079641 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 11:58:07.079653 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 11:58:07.079666 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 11:58:07.079679 systemd[1]: Reached target machines.target - Containers. Jan 24 11:58:07.079745 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 11:58:07.079760 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 11:58:07.079773 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 11:58:07.079793 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 11:58:07.079814 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 11:58:07.079831 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 11:58:07.079849 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 11:58:07.079912 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 11:58:07.079931 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 11:58:07.079950 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 11:58:07.079967 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 11:58:07.079983 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 11:58:07.080000 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 11:58:07.080094 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 11:58:07.080145 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 24 11:58:07.080166 kernel: ACPI: bus type drm_connector registered Jan 24 11:58:07.080183 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 11:58:07.080247 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 11:58:07.080261 kernel: fuse: init (API version 7.41) Jan 24 11:58:07.080274 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 11:58:07.080286 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 11:58:07.080299 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 24 11:58:07.080312 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 11:58:07.080350 systemd-journald[1226]: Collecting audit messages is enabled. Jan 24 11:58:07.080409 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 11:58:07.080426 systemd-journald[1226]: Journal started Jan 24 11:58:07.080472 systemd-journald[1226]: Runtime Journal (/run/log/journal/27d534a7e1db44e38883bdb11c80e90e) is 6M, max 48.2M, 42.1M free. Jan 24 11:58:06.031000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 24 11:58:06.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:06.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:06.589000 audit: BPF prog-id=14 op=UNLOAD Jan 24 11:58:06.589000 audit: BPF prog-id=13 op=UNLOAD Jan 24 11:58:06.923000 audit: BPF prog-id=15 op=LOAD Jan 24 11:58:06.931000 audit: BPF prog-id=16 op=LOAD Jan 24 11:58:06.937000 audit: BPF prog-id=17 op=LOAD Jan 24 11:58:07.071000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 24 11:58:07.071000 audit[1226]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffca3b24430 a2=4000 a3=0 items=0 ppid=1 pid=1226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:07.071000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 24 11:58:05.210228 systemd[1]: Queued start job for default target multi-user.target. Jan 24 11:58:05.378166 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 24 11:58:05.383120 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 11:58:05.393948 systemd[1]: systemd-journald.service: Consumed 1.462s CPU time. Jan 24 11:58:07.106489 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 11:58:07.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:58:07.108782 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 11:58:07.113535 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 11:58:07.121613 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 11:58:07.127140 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 11:58:07.145165 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 11:58:07.198639 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 11:58:07.228514 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 11:58:07.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.253519 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 11:58:07.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.308052 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 11:58:07.313468 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 11:58:07.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.332693 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 11:58:07.333258 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 11:58:07.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.344126 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 11:58:07.345055 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 11:58:07.352259 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 11:58:07.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.352867 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 24 11:58:07.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.371276 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 11:58:07.372883 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 11:58:07.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.380768 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 11:58:07.381442 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 11:58:07.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.389019 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 11:58:07.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.398337 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 11:58:07.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.407238 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 11:58:07.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.415095 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 24 11:58:07.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.439687 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 11:58:07.446407 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 24 11:58:07.467279 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
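systemd-network-generator and systemd-remount-fs, both finished above, take their input from the kernel command line. A small sketch of splitting /proc/cmdline into flags and key=value settings of the kind those generators consume; the key names printed at the end are just examples taken from this boot's command line:

```python
# Sketch: split /proc/cmdline into bare flags and key=value tokens, roughly
# the kind of input systemd-network-generator and systemd-remount-fs read.
# Repeated keys are not handled specially; the last occurrence wins.
def parse_cmdline(path="/proc/cmdline"):
    with open(path) as f:
        tokens = f.read().split()
    flags, kv = [], {}
    for tok in tokens:
        if "=" in tok:
            key, _, value = tok.partition("=")
            kv[key] = value
        else:
            flags.append(tok)
    return flags, kv

if __name__ == "__main__":
    flags, kv = parse_cmdline()
    print("flags:", flags)
    for key in ("root", "mount.usr", "mount.usrflags", "flatcar.first_boot"):
        if key in kv:
            print(f"{key} = {kv[key]}")
```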
Jan 24 11:58:07.477152 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 11:58:07.481457 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 11:58:07.481505 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 11:58:07.487147 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 24 11:58:07.493993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 11:58:07.494223 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 11:58:07.510260 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 11:58:07.518675 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 11:58:07.524307 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 11:58:07.527971 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 11:58:07.534369 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 11:58:07.571343 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 11:58:07.626107 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 11:58:07.647973 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 11:58:07.683834 systemd-journald[1226]: Time spent on flushing to /var/log/journal/27d534a7e1db44e38883bdb11c80e90e is 95.574ms for 1114 entries. Jan 24 11:58:07.683834 systemd-journald[1226]: System Journal (/var/log/journal/27d534a7e1db44e38883bdb11c80e90e) is 8M, max 163.5M, 155.5M free. Jan 24 11:58:07.881633 systemd-journald[1226]: Received client request to flush runtime journal. Jan 24 11:58:07.883064 kernel: loop1: detected capacity change from 0 to 111560 Jan 24 11:58:07.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.691772 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 11:58:07.741403 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 11:58:07.783954 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 11:58:07.791112 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 11:58:07.805049 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 11:58:07.814835 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 24 11:58:07.886697 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
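The journald status line above reports the flush to persistent storage taking 95.574ms for 1114 entries. A quick sketch that parses a line in that shape and derives the average cost per entry; the line format is simply what this log prints, not a stable interface:

```python
import re

# Sketch: pull the flush duration and entry count out of the journald status
# line quoted above and compute the average cost per entry.
LINE = ("systemd-journald[1226]: Time spent on flushing to "
        "/var/log/journal/27d534a7e1db44e38883bdb11c80e90e is 95.574ms for 1114 entries.")

m = re.search(r"is ([\d.]+)ms for (\d+) entries", LINE)
if m:
    total_ms, entries = float(m.group(1)), int(m.group(2))
    print(f"{total_ms} ms / {entries} entries "
          f"= {total_ms / entries * 1000:.1f} µs per entry")
```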
Jan 24 11:58:07.887354 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 24 11:58:07.887400 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 24 11:58:07.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.894841 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 11:58:07.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.906948 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 11:58:07.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:07.926247 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 11:58:07.939630 kernel: loop2: detected capacity change from 0 to 50784 Jan 24 11:58:07.945026 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 24 11:58:07.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:08.128279 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 11:58:08.148616 kernel: loop3: detected capacity change from 0 to 219144 Jan 24 11:58:08.151054 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 11:58:08.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:08.168000 audit: BPF prog-id=18 op=LOAD Jan 24 11:58:08.169000 audit: BPF prog-id=19 op=LOAD Jan 24 11:58:08.169000 audit: BPF prog-id=20 op=LOAD Jan 24 11:58:08.171881 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 24 11:58:08.177000 audit: BPF prog-id=21 op=LOAD Jan 24 11:58:08.182220 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 11:58:08.190045 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 11:58:08.198000 audit: BPF prog-id=22 op=LOAD Jan 24 11:58:08.198000 audit: BPF prog-id=23 op=LOAD Jan 24 11:58:08.198000 audit: BPF prog-id=24 op=LOAD Jan 24 11:58:08.200378 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 24 11:58:08.205000 audit: BPF prog-id=25 op=LOAD Jan 24 11:58:08.214000 audit: BPF prog-id=26 op=LOAD Jan 24 11:58:08.214000 audit: BPF prog-id=27 op=LOAD Jan 24 11:58:08.217258 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 11:58:08.281703 kernel: loop4: detected capacity change from 0 to 111560 Jan 24 11:58:08.402886 kernel: loop5: detected capacity change from 0 to 50784 Jan 24 11:58:08.409395 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Jan 24 11:58:08.414161 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. 
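The loopN "detected capacity change" messages above correspond to image files being attached as loop devices (here the sysext raw images merged later). A sketch that reports the current size of each attached loop device from sysfs, assuming the usual convention that /sys/block/<dev>/size and the kernel's capacity figures are in 512-byte sectors:

```python
import glob, os

# Sketch: report the size of each attached loop device. The kernel's
# "detected capacity change" figures, like /sys/block/<dev>/size, are in
# 512-byte sectors (assumption worth verifying on the target kernel).
SECTOR = 512

for size_path in sorted(glob.glob("/sys/block/loop*/size")):
    dev = size_path.split(os.sep)[3]          # e.g. "loop1"
    with open(size_path) as f:
        sectors = int(f.read().strip())
    if sectors:
        print(f"{dev}: {sectors} sectors ≈ {sectors * SECTOR / 2**20:.1f} MiB")
```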
Jan 24 11:58:08.539502 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 11:58:08.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:08.581176 kernel: loop6: detected capacity change from 0 to 219144 Jan 24 11:58:08.777429 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 11:58:08.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:08.799835 (sd-merge)[1290]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 24 11:58:08.929906 systemd-nsresourced[1286]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 24 11:58:08.994153 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 24 11:58:08.997139 (sd-merge)[1290]: Merged extensions into '/usr'. Jan 24 11:58:08.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:09.006223 systemd[1]: Reload requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 11:58:09.006247 systemd[1]: Reloading... Jan 24 11:58:09.285606 zram_generator::config[1332]: No configuration found. Jan 24 11:58:09.320175 systemd-oomd[1283]: No swap; memory pressure usage will be degraded Jan 24 11:58:09.375476 systemd-resolved[1284]: Positive Trust Anchors: Jan 24 11:58:09.376152 systemd-resolved[1284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 11:58:09.376250 systemd-resolved[1284]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 11:58:09.376301 systemd-resolved[1284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 11:58:09.391448 systemd-resolved[1284]: Defaulting to hostname 'linux'. Jan 24 11:58:10.010252 systemd[1]: Reloading finished in 1003 ms. Jan 24 11:58:10.039189 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 24 11:58:10.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:10.044967 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 11:58:10.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:58:10.049477 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 11:58:10.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:10.068457 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 11:58:10.085496 systemd[1]: Starting ensure-sysext.service... Jan 24 11:58:10.090778 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 11:58:10.095000 audit: BPF prog-id=28 op=LOAD Jan 24 11:58:10.095000 audit: BPF prog-id=22 op=UNLOAD Jan 24 11:58:10.095000 audit: BPF prog-id=29 op=LOAD Jan 24 11:58:10.095000 audit: BPF prog-id=30 op=LOAD Jan 24 11:58:10.095000 audit: BPF prog-id=23 op=UNLOAD Jan 24 11:58:10.095000 audit: BPF prog-id=24 op=UNLOAD Jan 24 11:58:10.096000 audit: BPF prog-id=31 op=LOAD Jan 24 11:58:10.096000 audit: BPF prog-id=25 op=UNLOAD Jan 24 11:58:10.096000 audit: BPF prog-id=32 op=LOAD Jan 24 11:58:10.096000 audit: BPF prog-id=33 op=LOAD Jan 24 11:58:10.096000 audit: BPF prog-id=26 op=UNLOAD Jan 24 11:58:10.096000 audit: BPF prog-id=27 op=UNLOAD Jan 24 11:58:10.099000 audit: BPF prog-id=34 op=LOAD Jan 24 11:58:10.099000 audit: BPF prog-id=15 op=UNLOAD Jan 24 11:58:10.099000 audit: BPF prog-id=35 op=LOAD Jan 24 11:58:10.099000 audit: BPF prog-id=36 op=LOAD Jan 24 11:58:10.100000 audit: BPF prog-id=16 op=UNLOAD Jan 24 11:58:10.100000 audit: BPF prog-id=17 op=UNLOAD Jan 24 11:58:10.102000 audit: BPF prog-id=37 op=LOAD Jan 24 11:58:10.102000 audit: BPF prog-id=21 op=UNLOAD Jan 24 11:58:10.103000 audit: BPF prog-id=38 op=LOAD Jan 24 11:58:10.104000 audit: BPF prog-id=18 op=UNLOAD Jan 24 11:58:10.104000 audit: BPF prog-id=39 op=LOAD Jan 24 11:58:10.104000 audit: BPF prog-id=40 op=LOAD Jan 24 11:58:10.104000 audit: BPF prog-id=19 op=UNLOAD Jan 24 11:58:10.104000 audit: BPF prog-id=20 op=UNLOAD Jan 24 11:58:10.142686 systemd[1]: Reload requested from client PID 1369 ('systemctl') (unit ensure-sysext.service)... Jan 24 11:58:10.143785 systemd[1]: Reloading... Jan 24 11:58:10.922690 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 24 11:58:10.923965 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 24 11:58:10.924990 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 11:58:10.928935 systemd-tmpfiles[1370]: ACLs are not supported, ignoring. Jan 24 11:58:10.929433 systemd-tmpfiles[1370]: ACLs are not supported, ignoring. Jan 24 11:58:10.955081 systemd-tmpfiles[1370]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 11:58:10.955254 systemd-tmpfiles[1370]: Skipping /boot Jan 24 11:58:11.019473 systemd-tmpfiles[1370]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 11:58:11.019530 systemd-tmpfiles[1370]: Skipping /boot Jan 24 11:58:11.047876 zram_generator::config[1400]: No configuration found. Jan 24 11:58:11.774653 systemd[1]: Reloading finished in 1630 ms. Jan 24 11:58:11.805256 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
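The "Positive Trust Anchors" logged by systemd-resolved above are DNSSEC DS records for the root zone. A small sketch that splits those exact strings into their RFC 4034 fields (key tag, algorithm, digest type, digest); the values are copied from the log, not looked up anywhere:

```python
from collections import namedtuple

# Sketch: split the DS trust-anchor strings logged by systemd-resolved above
# into their fields: owner, key tag, algorithm, digest type, digest.
DS = namedtuple("DS", "owner key_tag algorithm digest_type digest")

ANCHORS = [
    ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d",
    ". IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16",
]

def parse_ds(text):
    owner, _cls, _type, key_tag, alg, digest_type, digest = text.split()
    return DS(owner, int(key_tag), int(alg), int(digest_type), digest)

for anchor in ANCHORS:
    ds = parse_ds(anchor)
    print(f"key tag {ds.key_tag}, algorithm {ds.algorithm}, "
          f"digest type {ds.digest_type}, digest {ds.digest[:16]}…")
```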
Jan 24 11:58:11.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:11.816905 kernel: kauditd_printk_skb: 79 callbacks suppressed Jan 24 11:58:11.817003 kernel: audit: type=1130 audit(1769255891.811:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:11.834026 kernel: audit: type=1334 audit(1769255891.815:175): prog-id=41 op=LOAD Jan 24 11:58:11.835287 kernel: audit: type=1334 audit(1769255891.815:176): prog-id=31 op=UNLOAD Jan 24 11:58:11.815000 audit: BPF prog-id=41 op=LOAD Jan 24 11:58:11.815000 audit: BPF prog-id=31 op=UNLOAD Jan 24 11:58:11.845966 kernel: audit: type=1334 audit(1769255891.815:177): prog-id=42 op=LOAD Jan 24 11:58:11.846130 kernel: audit: type=1334 audit(1769255891.815:178): prog-id=43 op=LOAD Jan 24 11:58:11.815000 audit: BPF prog-id=42 op=LOAD Jan 24 11:58:11.815000 audit: BPF prog-id=43 op=LOAD Jan 24 11:58:11.848703 kernel: audit: type=1334 audit(1769255891.815:179): prog-id=32 op=UNLOAD Jan 24 11:58:11.815000 audit: BPF prog-id=32 op=UNLOAD Jan 24 11:58:11.853121 kernel: audit: type=1334 audit(1769255891.815:180): prog-id=33 op=UNLOAD Jan 24 11:58:11.815000 audit: BPF prog-id=33 op=UNLOAD Jan 24 11:58:11.817000 audit: BPF prog-id=44 op=LOAD Jan 24 11:58:11.877484 kernel: audit: type=1334 audit(1769255891.817:181): prog-id=44 op=LOAD Jan 24 11:58:11.877704 kernel: audit: type=1334 audit(1769255891.817:182): prog-id=34 op=UNLOAD Jan 24 11:58:11.817000 audit: BPF prog-id=34 op=UNLOAD Jan 24 11:58:11.881681 kernel: audit: type=1334 audit(1769255891.818:183): prog-id=45 op=LOAD Jan 24 11:58:11.818000 audit: BPF prog-id=45 op=LOAD Jan 24 11:58:11.818000 audit: BPF prog-id=46 op=LOAD Jan 24 11:58:11.818000 audit: BPF prog-id=35 op=UNLOAD Jan 24 11:58:11.818000 audit: BPF prog-id=36 op=UNLOAD Jan 24 11:58:11.825000 audit: BPF prog-id=47 op=LOAD Jan 24 11:58:11.837000 audit: BPF prog-id=38 op=UNLOAD Jan 24 11:58:11.845000 audit: BPF prog-id=48 op=LOAD Jan 24 11:58:11.846000 audit: BPF prog-id=49 op=LOAD Jan 24 11:58:11.846000 audit: BPF prog-id=39 op=UNLOAD Jan 24 11:58:11.846000 audit: BPF prog-id=40 op=UNLOAD Jan 24 11:58:12.336000 audit: BPF prog-id=50 op=LOAD Jan 24 11:58:12.336000 audit: BPF prog-id=37 op=UNLOAD Jan 24 11:58:12.344000 audit: BPF prog-id=51 op=LOAD Jan 24 11:58:12.374000 audit: BPF prog-id=28 op=UNLOAD Jan 24 11:58:12.376000 audit: BPF prog-id=52 op=LOAD Jan 24 11:58:12.376000 audit: BPF prog-id=53 op=LOAD Jan 24 11:58:12.376000 audit: BPF prog-id=29 op=UNLOAD Jan 24 11:58:12.376000 audit: BPF prog-id=30 op=UNLOAD Jan 24 11:58:12.383174 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 11:58:12.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:12.416507 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 24 11:58:12.425210 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 11:58:12.438695 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
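The bursts of "BPF prog-id=N op=LOAD/UNLOAD" audit records above come from systemd re-attaching its BPF programs around each reload. A sketch that replays a handful of those records (copied from the log) and reports which program ids would remain loaded afterwards:

```python
import re

# Sketch: replay "audit: BPF prog-id=N op=LOAD|UNLOAD" records (as seen in
# the reload bursts above) and report which program ids are still loaded.
RECORDS = """
audit: BPF prog-id=41 op=LOAD
audit: BPF prog-id=31 op=UNLOAD
audit: BPF prog-id=42 op=LOAD
audit: BPF prog-id=43 op=LOAD
audit: BPF prog-id=32 op=UNLOAD
audit: BPF prog-id=33 op=UNLOAD
"""

loaded = set()
for prog_id, op in re.findall(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", RECORDS):
    (loaded.add if op == "LOAD" else loaded.discard)(int(prog_id))
print("still loaded:", sorted(loaded))
```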
Jan 24 11:58:12.448074 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 11:58:12.462000 audit: BPF prog-id=8 op=UNLOAD Jan 24 11:58:12.462000 audit: BPF prog-id=7 op=UNLOAD Jan 24 11:58:12.479000 audit: BPF prog-id=54 op=LOAD Jan 24 11:58:12.479000 audit: BPF prog-id=55 op=LOAD Jan 24 11:58:12.481266 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 11:58:12.488270 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 11:58:12.496910 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 11:58:12.497135 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 11:58:12.499126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 11:58:12.517187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 11:58:12.531000 audit[1452]: SYSTEM_BOOT pid=1452 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 24 11:58:12.532147 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 11:58:12.548627 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 11:58:12.597136 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 11:58:12.623474 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 11:58:12.624447 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 11:58:12.682469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 11:58:12.683297 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 11:58:12.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:12.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:12.694842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 11:58:12.695439 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 11:58:12.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:12.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:58:12.793939 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 11:58:12.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:12.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:12.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:12.819851 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 11:58:12.820385 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 11:58:12.835381 systemd-udevd[1451]: Using default interface naming scheme 'v257'. Jan 24 11:58:12.840716 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 11:58:12.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:12.866000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 24 11:58:12.866000 audit[1473]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd9b46050 a2=420 a3=0 items=0 ppid=1441 pid=1473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:12.866000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 11:58:12.867859 augenrules[1473]: No rules Jan 24 11:58:12.869861 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 11:58:12.870230 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 11:58:12.880852 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 11:58:12.881143 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 11:58:12.884334 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 11:58:12.891402 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 11:58:12.902107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 11:58:12.907180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 11:58:12.907703 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 11:58:12.907980 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
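The PROCTITLE field in the audit records above is the process's argv, hex-encoded with NUL separators. Decoding the value shown confirms which command audit-rules.service ran:

```python
# Sketch: decode the hex PROCTITLE value from the audit record above.
# The field is the raw argv of the process, NUL-separated.
PROCTITLE = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

argv = [part.decode() for part in bytes.fromhex(PROCTITLE).split(b"\x00")]
print(argv)          # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```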
Jan 24 11:58:12.908180 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 11:58:12.910769 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 11:58:12.916625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 11:58:12.924205 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 11:58:12.929854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 11:58:12.930231 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 11:58:12.936330 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 11:58:12.936959 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 11:58:12.950785 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 11:58:12.964936 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 24 11:58:12.972029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 11:58:12.974082 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 11:58:12.983989 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 11:58:12.996145 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 11:58:13.005046 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 11:58:13.012460 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 11:58:13.012884 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 11:58:13.013024 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 11:58:13.032075 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 11:58:13.038902 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 11:58:13.072901 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 11:58:13.079480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 11:58:13.085151 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 11:58:13.091080 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 11:58:13.091965 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 11:58:13.097900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 11:58:13.098669 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 11:58:13.114025 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 11:58:13.114462 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 11:58:13.130163 systemd[1]: Finished ensure-sysext.service. 
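Several units above (proc-xen.mount, xenserver-pv-version.service) are skipped on ConditionVirtualization=xen because this guest runs under KVM. A sketch, assuming systemd-detect-virt is on PATH, that queries the same detection those condition checks compare against:

```python
import subprocess

# Sketch: ask systemd which virtualization it detects; ConditionVirtualization
# checks compare against this value (on this machine it would not be "xen",
# which is why the Xen-only units above are skipped).
result = subprocess.run(
    ["systemd-detect-virt"], capture_output=True, text=True, check=False
)
# systemd-detect-virt prints "none" and exits non-zero when nothing is detected.
print("detected virtualization:", result.stdout.strip() or "none")
```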
Jan 24 11:58:13.143770 augenrules[1500]: /sbin/augenrules: No change Jan 24 11:58:13.158413 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 11:58:13.159040 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 11:58:13.159161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 11:58:13.175000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 11:58:13.175000 audit[1539]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb02c8560 a2=420 a3=0 items=0 ppid=1500 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:13.175000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 11:58:13.175000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 24 11:58:13.175000 audit[1539]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb02ca9f0 a2=420 a3=0 items=0 ppid=1500 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:13.175000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 11:58:13.176695 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 11:58:13.184353 augenrules[1539]: No rules Jan 24 11:58:13.182023 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 11:58:13.186331 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 11:58:13.186987 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 11:58:13.774439 systemd-networkd[1510]: lo: Link UP Jan 24 11:58:13.774472 systemd-networkd[1510]: lo: Gained carrier Jan 24 11:58:13.779326 systemd-networkd[1510]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 11:58:13.779364 systemd-networkd[1510]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 11:58:13.781520 systemd-networkd[1510]: eth0: Link UP Jan 24 11:58:13.784276 systemd-networkd[1510]: eth0: Gained carrier Jan 24 11:58:13.784324 systemd-networkd[1510]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 11:58:13.789926 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 11:58:13.795964 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 11:58:13.809013 systemd[1]: Reached target network.target - Network. Jan 24 11:58:13.812683 systemd-networkd[1510]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 11:58:13.817182 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
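The systemd-networkd record above reports the DHCPv4 lease acquired on eth0. A sketch that parses that exact line into its address, prefix, gateway, and server fields:

```python
import re

# Sketch: parse the systemd-networkd DHCPv4 line logged above into fields.
LINE = "eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1"

m = re.match(
    r"(?P<ifname>\S+): DHCPv4 address (?P<addr>[\d.]+)/(?P<prefix>\d+), "
    r"gateway (?P<gw>[\d.]+) acquired from (?P<server>[\d.]+)",
    LINE,
)
if m:
    lease = m.groupdict()
    print(f"{lease['ifname']}: {lease['addr']}/{lease['prefix']} "
          f"via {lease['gw']} (server {lease['server']})")
```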
Jan 24 11:58:13.827231 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 24 11:58:13.836370 systemd-timesyncd[1538]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 24 11:58:13.836613 systemd-timesyncd[1538]: Initial clock synchronization to Sat 2026-01-24 11:58:14.086940 UTC. Jan 24 11:58:13.836865 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 11:58:13.843233 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 11:58:13.850091 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 11:58:14.119984 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 11:58:14.129504 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 11:58:14.194884 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 11:58:14.194918 kernel: ACPI: button: Power Button [PWRF] Jan 24 11:58:14.194940 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 11:58:14.201549 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 11:58:14.587002 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 24 11:58:15.656543 systemd-networkd[1510]: eth0: Gained IPv6LL Jan 24 11:58:15.996791 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 11:58:16.030463 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 11:58:16.048726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 11:58:16.394144 ldconfig[1443]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 11:58:16.395068 kernel: kvm_amd: TSC scaling supported Jan 24 11:58:16.395121 kernel: kvm_amd: Nested Virtualization enabled Jan 24 11:58:16.395148 kernel: kvm_amd: Nested Paging enabled Jan 24 11:58:16.399267 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 24 11:58:16.399388 kernel: kvm_amd: PMU virtualization is disabled Jan 24 11:58:16.410850 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 11:58:16.498748 kernel: EDAC MC: Ver: 3.0.0 Jan 24 11:58:16.858843 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 11:58:16.876484 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 11:58:16.956737 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 11:58:16.962083 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 11:58:16.967846 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 11:58:16.971888 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 11:58:16.976810 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 24 11:58:16.981863 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 11:58:16.985532 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 11:58:16.990648 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 24 11:58:16.995110 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
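systemd-timesyncd above reports the initial synchronization target while the journal timestamp on that record is slightly earlier, i.e. the clock was stepped forward. A small computation of that step, assuming both timestamps are UTC and share the date shown elsewhere in this log (2026-01-24):

```python
from datetime import datetime

# Sketch: how far the clock was stepped at the first synchronization.
# Assumes both timestamps are UTC and share the same date (2026-01-24).
journal_ts = datetime(2026, 1, 24, 11, 58, 13, 836613)   # when the record was logged
synced_to  = datetime(2026, 1, 24, 11, 58, 14, 86940)    # "Initial clock synchronization to ..."

delta = synced_to - journal_ts
print(f"clock stepped forward by {delta.total_seconds():.3f} s")
```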
Jan 24 11:58:16.998784 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 11:58:17.002977 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 11:58:17.003067 systemd[1]: Reached target paths.target - Path Units. Jan 24 11:58:17.006363 systemd[1]: Reached target timers.target - Timer Units. Jan 24 11:58:17.015625 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 11:58:17.024071 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 11:58:17.030266 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 24 11:58:17.035011 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 24 11:58:17.056154 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 24 11:58:17.489107 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 11:58:17.496224 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 24 11:58:17.501210 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 11:58:17.507348 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 11:58:17.510887 systemd[1]: Reached target basic.target - Basic System. Jan 24 11:58:17.514376 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 11:58:17.514510 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 11:58:17.518223 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 11:58:17.525263 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 11:58:17.532477 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 11:58:17.545356 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 11:58:17.570149 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 11:58:17.578147 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 11:58:17.583030 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 11:58:17.597165 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 24 11:58:17.607859 jq[1592]: false Jan 24 11:58:17.607717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 11:58:17.615685 extend-filesystems[1593]: Found /dev/vda6 Jan 24 11:58:17.626833 extend-filesystems[1593]: Found /dev/vda9 Jan 24 11:58:17.616755 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 11:58:17.634199 extend-filesystems[1593]: Checking size of /dev/vda9 Jan 24 11:58:17.637063 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 11:58:17.673932 extend-filesystems[1593]: Resized partition /dev/vda9 Jan 24 11:58:17.677374 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 11:58:17.685982 extend-filesystems[1610]: resize2fs 1.47.3 (8-Jul-2025) Jan 24 11:58:17.694006 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
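The "Listening on ..." records above are socket units systemd holds open so the matching services (sshd, docker, hostnamed) only start on first connection. A sketch, assuming systemctl is available on the host, that lists the active socket units and what each one activates:

```python
import subprocess

# Sketch: list active socket units and the services they trigger, matching
# the "Listening on ..." records above. Requires systemctl on PATH.
out = subprocess.run(
    ["systemctl", "list-sockets", "--no-pager", "--no-legend"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    print(line)
```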
Jan 24 11:58:17.737256 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Refreshing passwd entry cache Jan 24 11:58:17.780367 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 11:58:17.737274 oslogin_cache_refresh[1594]: Refreshing passwd entry cache Jan 24 11:58:17.935067 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 24 11:58:17.935858 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Failure getting users, quitting Jan 24 11:58:17.935858 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 11:58:17.935858 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Refreshing group entry cache Jan 24 11:58:17.851448 oslogin_cache_refresh[1594]: Failure getting users, quitting Jan 24 11:58:17.864682 oslogin_cache_refresh[1594]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 11:58:17.868469 oslogin_cache_refresh[1594]: Refreshing group entry cache Jan 24 11:58:17.938091 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 11:58:17.945887 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 11:58:17.989157 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Failure getting groups, quitting Jan 24 11:58:17.989157 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 11:58:17.989140 oslogin_cache_refresh[1594]: Failure getting groups, quitting Jan 24 11:58:17.989175 oslogin_cache_refresh[1594]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 11:58:17.993966 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 11:58:18.002280 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 11:58:18.107126 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 11:58:18.231629 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 24 11:58:18.307150 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 11:58:18.350131 extend-filesystems[1610]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 24 11:58:18.350131 extend-filesystems[1610]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 24 11:58:18.350131 extend-filesystems[1610]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 24 11:58:18.318759 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 11:58:18.431323 extend-filesystems[1593]: Resized filesystem in /dev/vda9 Jan 24 11:58:18.436675 update_engine[1620]: I20260124 11:58:18.421257 1620 main.cc:92] Flatcar Update Engine starting Jan 24 11:58:18.319373 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 11:58:18.320293 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 24 11:58:18.321121 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 24 11:58:18.331048 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 11:58:18.331915 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 11:58:18.339794 systemd[1]: extend-filesystems.service: Deactivated successfully. 
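extend-filesystems grows the root filesystem online; the kernel and resize2fs lines above give the before/after sizes as counts of 4 KiB blocks. A quick computation of what those figures mean in bytes:

```python
# Sketch: convert the resize2fs figures above (4 KiB blocks) into sizes.
BLOCK = 4096
old_blocks, new_blocks = 456_704, 1_784_827

old_bytes, new_bytes = old_blocks * BLOCK, new_blocks * BLOCK
print(f"before: {old_bytes / 2**30:.2f} GiB")
print(f"after:  {new_bytes / 2**30:.2f} GiB")
print(f"grown by {(new_bytes - old_bytes) / 2**30:.2f} GiB")
```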
Jan 24 11:58:18.340221 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 11:58:18.352487 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 11:58:18.355767 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 11:58:18.394645 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 11:58:18.447794 jq[1621]: true Jan 24 11:58:18.472441 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 11:58:18.476536 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 24 11:58:18.497226 jq[1656]: true Jan 24 11:58:18.529316 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 11:58:18.537084 tar[1635]: linux-amd64/LICENSE Jan 24 11:58:18.537724 tar[1635]: linux-amd64/helm Jan 24 11:58:18.915923 dbus-daemon[1590]: [system] SELinux support is enabled Jan 24 11:58:18.920929 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 11:58:18.929744 systemd-logind[1619]: Watching system buttons on /dev/input/event2 (Power Button) Jan 24 11:58:18.930420 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 11:58:18.930495 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 11:58:18.933481 bash[1676]: Updated "/home/core/.ssh/authorized_keys" Jan 24 11:58:18.935173 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 11:58:18.935213 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 11:58:18.935714 systemd-logind[1619]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 11:58:18.937417 systemd-logind[1619]: New seat seat0. Jan 24 11:58:18.942953 update_engine[1620]: I20260124 11:58:18.942497 1620 update_check_scheduler.cc:74] Next update check in 7m17s Jan 24 11:58:18.943618 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 11:58:18.948131 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 11:58:18.956402 systemd[1]: Started update-engine.service - Update Engine. Jan 24 11:58:18.959726 dbus-daemon[1590]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 24 11:58:19.114044 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 24 11:58:19.129194 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 11:58:19.626389 sshd_keygen[1636]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 11:58:19.954613 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 11:58:19.964407 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 11:58:19.967899 locksmithd[1678]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 11:58:20.024053 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 11:58:20.024457 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 11:58:20.037464 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
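The update engine above schedules its next check "in 7m17s". A tiny parser for that duration format, handling only the h/m/s units that appear in this log's output:

```python
import re

# Sketch: convert the "Next update check in 7m17s" interval printed by
# update_engine above into seconds. Only handles h/m/s components.
def parse_interval(text):
    total = 0
    for value, unit in re.findall(r"(\d+)([hms])", text):
        total += int(value) * {"h": 3600, "m": 60, "s": 1}[unit]
    return total

print(parse_interval("7m17s"))   # 437 seconds
```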
Jan 24 11:58:20.181233 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 11:58:20.201523 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 11:58:20.216153 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 11:58:20.222368 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 11:58:20.690129 containerd[1652]: time="2026-01-24T11:58:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 24 11:58:20.691466 containerd[1652]: time="2026-01-24T11:58:20.691051715Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 24 11:58:20.853381 containerd[1652]: time="2026-01-24T11:58:20.846805517Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="210.204µs" Jan 24 11:58:20.853381 containerd[1652]: time="2026-01-24T11:58:20.847024121Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 24 11:58:20.853381 containerd[1652]: time="2026-01-24T11:58:20.847424248Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 24 11:58:20.853381 containerd[1652]: time="2026-01-24T11:58:20.847465069Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 24 11:58:20.853381 containerd[1652]: time="2026-01-24T11:58:20.848591395Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 24 11:58:20.853381 containerd[1652]: time="2026-01-24T11:58:20.848647933Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 24 11:58:20.853381 containerd[1652]: time="2026-01-24T11:58:20.848846573Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 24 11:58:20.853381 containerd[1652]: time="2026-01-24T11:58:20.848865806Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 24 11:58:20.853381 containerd[1652]: time="2026-01-24T11:58:20.849381092Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 24 11:58:20.853381 containerd[1652]: time="2026-01-24T11:58:20.849402650Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 24 11:58:20.853381 containerd[1652]: time="2026-01-24T11:58:20.849419444Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 24 11:58:20.853381 containerd[1652]: time="2026-01-24T11:58:20.849432703Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 24 11:58:20.856212 containerd[1652]: time="2026-01-24T11:58:20.854775301Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 24 
11:58:20.856212 containerd[1652]: time="2026-01-24T11:58:20.854870456Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 24 11:58:20.856212 containerd[1652]: time="2026-01-24T11:58:20.855247916Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 24 11:58:20.860051 containerd[1652]: time="2026-01-24T11:58:20.859815892Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 24 11:58:20.860051 containerd[1652]: time="2026-01-24T11:58:20.859915142Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 24 11:58:20.860051 containerd[1652]: time="2026-01-24T11:58:20.859933338Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 24 11:58:20.860051 containerd[1652]: time="2026-01-24T11:58:20.860005766Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 24 11:58:20.861036 containerd[1652]: time="2026-01-24T11:58:20.860810996Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 24 11:58:20.861036 containerd[1652]: time="2026-01-24T11:58:20.861010348Z" level=info msg="metadata content store policy set" policy=shared Jan 24 11:58:20.873794 containerd[1652]: time="2026-01-24T11:58:20.873634524Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 24 11:58:20.873947 containerd[1652]: time="2026-01-24T11:58:20.873880722Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 24 11:58:20.874192 containerd[1652]: time="2026-01-24T11:58:20.874106925Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 24 11:58:20.874192 containerd[1652]: time="2026-01-24T11:58:20.874158404Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 24 11:58:20.874272 containerd[1652]: time="2026-01-24T11:58:20.874206866Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 24 11:58:20.874272 containerd[1652]: time="2026-01-24T11:58:20.874257968Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 24 11:58:20.874400 containerd[1652]: time="2026-01-24T11:58:20.874278512Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 24 11:58:20.874400 containerd[1652]: time="2026-01-24T11:58:20.874294990Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 24 11:58:20.874400 containerd[1652]: time="2026-01-24T11:58:20.874311621Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 24 11:58:20.874400 containerd[1652]: time="2026-01-24T11:58:20.874368718Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 24 11:58:20.874400 containerd[1652]: time="2026-01-24T11:58:20.874393609Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 24 11:58:20.874550 containerd[1652]: time="2026-01-24T11:58:20.874409621Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 24 11:58:20.874550 containerd[1652]: time="2026-01-24T11:58:20.874470467Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 24 11:58:20.874671 containerd[1652]: time="2026-01-24T11:58:20.874618725Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 24 11:58:20.875098 containerd[1652]: time="2026-01-24T11:58:20.875046751Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 24 11:58:20.875280 containerd[1652]: time="2026-01-24T11:58:20.875203006Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 24 11:58:20.875280 containerd[1652]: time="2026-01-24T11:58:20.875258061Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 24 11:58:20.875280 containerd[1652]: time="2026-01-24T11:58:20.875276703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 24 11:58:20.875453 containerd[1652]: time="2026-01-24T11:58:20.875293852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 24 11:58:20.875453 containerd[1652]: time="2026-01-24T11:58:20.875310139Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 24 11:58:20.875453 containerd[1652]: time="2026-01-24T11:58:20.875392685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 24 11:58:20.875453 containerd[1652]: time="2026-01-24T11:58:20.875450595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 24 11:58:20.875629 containerd[1652]: time="2026-01-24T11:58:20.875501556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 24 11:58:20.878781 containerd[1652]: time="2026-01-24T11:58:20.878695587Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 24 11:58:20.878781 containerd[1652]: time="2026-01-24T11:58:20.878755092Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 24 11:58:20.878863 containerd[1652]: time="2026-01-24T11:58:20.878793668Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 24 11:58:20.879456 containerd[1652]: time="2026-01-24T11:58:20.879160775Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 24 11:58:20.879456 containerd[1652]: time="2026-01-24T11:58:20.879236688Z" level=info msg="Start snapshots syncer" Jan 24 11:58:20.879456 containerd[1652]: time="2026-01-24T11:58:20.879388086Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 24 11:58:20.883500 containerd[1652]: time="2026-01-24T11:58:20.883251321Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 24 11:58:20.883500 containerd[1652]: time="2026-01-24T11:58:20.883496584Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 24 11:58:20.884296 containerd[1652]: time="2026-01-24T11:58:20.883814062Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 24 11:58:20.884296 containerd[1652]: time="2026-01-24T11:58:20.884051462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 24 11:58:20.884296 containerd[1652]: time="2026-01-24T11:58:20.884080509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 24 11:58:20.884296 containerd[1652]: time="2026-01-24T11:58:20.884099476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 24 11:58:20.884296 containerd[1652]: time="2026-01-24T11:58:20.884114918Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 24 11:58:20.884296 containerd[1652]: time="2026-01-24T11:58:20.884225028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 24 11:58:20.884296 containerd[1652]: time="2026-01-24T11:58:20.884288841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 24 11:58:20.884984 containerd[1652]: time="2026-01-24T11:58:20.884335128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 24 11:58:20.884984 containerd[1652]: time="2026-01-24T11:58:20.884378326Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 24 
11:58:20.884984 containerd[1652]: time="2026-01-24T11:58:20.884458446Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 24 11:58:20.887612 containerd[1652]: time="2026-01-24T11:58:20.887343763Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 24 11:58:20.887612 containerd[1652]: time="2026-01-24T11:58:20.887406641Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 24 11:58:20.887612 containerd[1652]: time="2026-01-24T11:58:20.887427062Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 24 11:58:20.887612 containerd[1652]: time="2026-01-24T11:58:20.887447859Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 24 11:58:20.887612 containerd[1652]: time="2026-01-24T11:58:20.887460730Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 24 11:58:20.887612 containerd[1652]: time="2026-01-24T11:58:20.887477617Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 24 11:58:20.887964 containerd[1652]: time="2026-01-24T11:58:20.887833924Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 24 11:58:20.888038 containerd[1652]: time="2026-01-24T11:58:20.887934138Z" level=info msg="runtime interface created" Jan 24 11:58:20.888108 containerd[1652]: time="2026-01-24T11:58:20.888089205Z" level=info msg="created NRI interface" Jan 24 11:58:20.888445 containerd[1652]: time="2026-01-24T11:58:20.888168459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 24 11:58:20.888445 containerd[1652]: time="2026-01-24T11:58:20.888207879Z" level=info msg="Connect containerd service" Jan 24 11:58:20.888445 containerd[1652]: time="2026-01-24T11:58:20.888297558Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 11:58:20.891854 containerd[1652]: time="2026-01-24T11:58:20.891829630Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 11:58:22.362625 tar[1635]: linux-amd64/README.md Jan 24 11:58:22.390875 containerd[1652]: time="2026-01-24T11:58:22.390721533Z" level=info msg="Start subscribing containerd event" Jan 24 11:58:22.391348 containerd[1652]: time="2026-01-24T11:58:22.390928207Z" level=info msg="Start recovering state" Jan 24 11:58:22.391720 containerd[1652]: time="2026-01-24T11:58:22.391690352Z" level=info msg="Start event monitor" Jan 24 11:58:22.391946 containerd[1652]: time="2026-01-24T11:58:22.391844377Z" level=info msg="Start cni network conf syncer for default" Jan 24 11:58:22.392278 containerd[1652]: time="2026-01-24T11:58:22.392234169Z" level=info msg="Start streaming server" Jan 24 11:58:22.392385 containerd[1652]: time="2026-01-24T11:58:22.392358899Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 24 11:58:22.392494 containerd[1652]: time="2026-01-24T11:58:22.392474960Z" level=info msg="runtime interface starting up..." 
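
The "failed to load cni during init" message above means containerd found no network configuration under /etc/cni/net.d at startup; the CRI plugin carries on, and the error normally clears once a network add-on installs a conflist in that directory, which the "Start cni network conf syncer for default" step later picks up. Purely as an illustrative sketch (the file name, bridge name, plugin choice, and pod subnet below are assumptions; on a real cluster the add-on writes this file itself), a minimal bridge conflist could be generated like so:

    import json
    from pathlib import Path

    # Illustrative only: a CNI add-on (flannel, Calico, ...) normally installs
    # its own conflist here. Plugin choice, names, and subnet are assumptions.
    conflist = {
        "cniVersion": "1.0.0",
        "name": "example-bridge",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {"type": "host-local", "subnet": "10.244.0.0/24"},
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    Path("/etc/cni/net.d").mkdir(parents=True, exist_ok=True)
    Path("/etc/cni/net.d/10-example.conflist").write_text(
        json.dumps(conflist, indent=2)
    )
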
Jan 24 11:58:22.392536 containerd[1652]: time="2026-01-24T11:58:22.392516489Z" level=info msg="starting plugins..." Jan 24 11:58:22.393818 containerd[1652]: time="2026-01-24T11:58:22.392683000Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 24 11:58:22.393818 containerd[1652]: time="2026-01-24T11:58:22.393135743Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 11:58:22.393818 containerd[1652]: time="2026-01-24T11:58:22.393329181Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 11:58:22.399620 containerd[1652]: time="2026-01-24T11:58:22.399081242Z" level=info msg="containerd successfully booted in 1.714428s" Jan 24 11:58:22.399388 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 11:58:22.424500 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 11:58:25.852211 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 11:58:25.895728 systemd[1]: Started sshd@0-10.0.0.100:22-10.0.0.1:40308.service - OpenSSH per-connection server daemon (10.0.0.1:40308). Jan 24 11:58:26.347373 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 40308 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 11:58:26.359773 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 11:58:26.375882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 11:58:26.379310 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 11:58:26.381416 systemd[1]: Startup finished in 9.129s (kernel) + 18.613s (initrd) + 22.973s (userspace) = 50.716s. Jan 24 11:58:26.388229 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 11:58:26.487872 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 11:58:26.490823 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 11:58:26.506043 systemd-logind[1619]: New session 1 of user core. Jan 24 11:58:26.572967 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 11:58:26.584502 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 11:58:26.627381 (systemd)[1740]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 24 11:58:26.640341 systemd-logind[1619]: New session 2 of user core. Jan 24 11:58:27.332216 systemd[1740]: Queued start job for default target default.target. Jan 24 11:58:27.520016 systemd[1740]: Created slice app.slice - User Application Slice. Jan 24 11:58:27.520147 systemd[1740]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 24 11:58:27.520171 systemd[1740]: Reached target paths.target - Paths. Jan 24 11:58:27.556013 systemd[1740]: Reached target timers.target - Timers. Jan 24 11:58:27.562811 systemd[1740]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 11:58:27.569069 systemd[1740]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 24 11:58:27.625488 systemd[1740]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 11:58:27.626969 systemd[1740]: Reached target sockets.target - Sockets. Jan 24 11:58:27.634171 systemd[1740]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. 
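
For reference, the "Startup finished" figures above are systemd's own per-phase timings; summing the three parts gives 50.715 s against the reported 50.716 s total, which is consistent with each figure being rounded to milliseconds independently before printing. A quick check:

    import re

    line = ("Startup finished in 9.129s (kernel) + 18.613s (initrd) "
            "+ 22.973s (userspace) = 50.716s")

    # Grab every "<seconds>s" figure; systemd prints its own total last.
    figures = [float(x) for x in re.findall(r"([\d.]+)s", line)]
    *parts, reported_total = figures
    print(round(sum(parts), 3), reported_total)  # 50.715 vs. 50.716
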
Jan 24 11:58:27.634346 systemd[1740]: Reached target basic.target - Basic System. Jan 24 11:58:27.634519 systemd[1740]: Reached target default.target - Main User Target. Jan 24 11:58:27.634640 systemd[1740]: Startup finished in 964ms. Jan 24 11:58:27.634993 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 11:58:27.648910 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 11:58:27.708279 systemd[1]: Started sshd@1-10.0.0.100:22-10.0.0.1:40316.service - OpenSSH per-connection server daemon (10.0.0.1:40316). Jan 24 11:58:28.445701 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 40316 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 11:58:28.453433 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 11:58:28.512468 systemd-logind[1619]: New session 3 of user core. Jan 24 11:58:28.531044 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 11:58:28.596474 sshd[1764]: Connection closed by 10.0.0.1 port 40316 Jan 24 11:58:28.594089 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Jan 24 11:58:28.616051 systemd[1]: sshd@1-10.0.0.100:22-10.0.0.1:40316.service: Deactivated successfully. Jan 24 11:58:28.621996 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 11:58:28.627638 systemd-logind[1619]: Session 3 logged out. Waiting for processes to exit. Jan 24 11:58:28.639112 systemd[1]: Started sshd@2-10.0.0.100:22-10.0.0.1:40322.service - OpenSSH per-connection server daemon (10.0.0.1:40322). Jan 24 11:58:28.644371 systemd-logind[1619]: Removed session 3. Jan 24 11:58:29.338340 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 40322 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 11:58:29.341739 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 11:58:29.358487 systemd-logind[1619]: New session 4 of user core. Jan 24 11:58:29.367130 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 11:58:29.416870 sshd[1774]: Connection closed by 10.0.0.1 port 40322 Jan 24 11:58:29.417455 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Jan 24 11:58:29.442878 systemd[1]: sshd@2-10.0.0.100:22-10.0.0.1:40322.service: Deactivated successfully. Jan 24 11:58:29.446849 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 11:58:29.451738 systemd-logind[1619]: Session 4 logged out. Waiting for processes to exit. Jan 24 11:58:29.455736 systemd[1]: Started sshd@3-10.0.0.100:22-10.0.0.1:40330.service - OpenSSH per-connection server daemon (10.0.0.1:40330). Jan 24 11:58:29.460974 systemd-logind[1619]: Removed session 4. Jan 24 11:58:29.793813 kubelet[1736]: E0124 11:58:29.793181 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 11:58:29.802362 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 40330 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 11:58:29.808801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 11:58:29.814428 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
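
The kubelet failure above (repeated later in the log) is the expected pattern on a node that has not yet been initialized or joined with kubeadm: the unit starts the kubelet before /var/lib/kubelet/config.yaml exists, the kubelet exits, and systemd keeps scheduling restarts until kubeadm writes that file. The sketch below is illustrative only (kubeadm generates the real file; the fields shown are assumptions, apart from cgroupDriver: systemd matching the SystemdCgroup=true setting in the containerd CRI config dumped earlier):

    from pathlib import Path

    # Illustrative only: `kubeadm init` / `kubeadm join` writes the real file;
    # hand-creating it like this is for experiments, not production nodes.
    KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    """

    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(KUBELET_CONFIG)
    print(f"wrote {path} ({len(KUBELET_CONFIG)} bytes)")
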
Jan 24 11:58:29.809239 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 11:58:29.819422 systemd[1]: kubelet.service: Consumed 7.312s CPU time, 258.8M memory peak. Jan 24 11:58:30.258057 systemd-logind[1619]: New session 5 of user core. Jan 24 11:58:30.266033 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 11:58:30.337777 sshd[1785]: Connection closed by 10.0.0.1 port 40330 Jan 24 11:58:30.336652 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Jan 24 11:58:30.353788 systemd[1]: sshd@3-10.0.0.100:22-10.0.0.1:40330.service: Deactivated successfully. Jan 24 11:58:30.358320 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 11:58:30.361286 systemd-logind[1619]: Session 5 logged out. Waiting for processes to exit. Jan 24 11:58:30.368652 systemd[1]: Started sshd@4-10.0.0.100:22-10.0.0.1:40336.service - OpenSSH per-connection server daemon (10.0.0.1:40336). Jan 24 11:58:30.372300 systemd-logind[1619]: Removed session 5. Jan 24 11:58:30.555438 sshd[1791]: Accepted publickey for core from 10.0.0.1 port 40336 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 11:58:30.559135 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 11:58:30.572665 systemd-logind[1619]: New session 6 of user core. Jan 24 11:58:30.580199 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 11:58:31.043195 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 11:58:31.044014 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 11:58:31.061893 sudo[1796]: pam_unix(sudo:session): session closed for user root Jan 24 11:58:31.068597 sshd[1795]: Connection closed by 10.0.0.1 port 40336 Jan 24 11:58:31.070852 sshd-session[1791]: pam_unix(sshd:session): session closed for user core Jan 24 11:58:31.097191 systemd[1]: sshd@4-10.0.0.100:22-10.0.0.1:40336.service: Deactivated successfully. Jan 24 11:58:31.101308 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 11:58:31.107401 systemd-logind[1619]: Session 6 logged out. Waiting for processes to exit. Jan 24 11:58:31.115453 systemd[1]: Started sshd@5-10.0.0.100:22-10.0.0.1:40340.service - OpenSSH per-connection server daemon (10.0.0.1:40340). Jan 24 11:58:31.117101 systemd-logind[1619]: Removed session 6. Jan 24 11:58:31.237471 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 40340 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 11:58:31.241203 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 11:58:31.253080 systemd-logind[1619]: New session 7 of user core. Jan 24 11:58:31.271873 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 11:58:31.332369 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 11:58:31.334276 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 11:58:31.342960 sudo[1809]: pam_unix(sudo:session): session closed for user root Jan 24 11:58:31.468960 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 24 11:58:31.470086 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 11:58:31.724319 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 24 11:58:31.916000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 11:58:31.920006 augenrules[1833]: No rules Jan 24 11:58:31.921277 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 24 11:58:31.921343 kernel: audit: type=1305 audit(1769255911.916:218): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 11:58:31.921930 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 11:58:31.922766 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 11:58:31.916000 audit[1833]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd9ac37030 a2=420 a3=0 items=0 ppid=1814 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:31.928242 sudo[1808]: pam_unix(sudo:session): session closed for user root Jan 24 11:58:31.932219 sshd[1807]: Connection closed by 10.0.0.1 port 40340 Jan 24 11:58:31.934076 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Jan 24 11:58:31.941201 kernel: audit: type=1300 audit(1769255911.916:218): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd9ac37030 a2=420 a3=0 items=0 ppid=1814 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:31.941272 kernel: audit: type=1327 audit(1769255911.916:218): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 11:58:31.916000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 11:58:31.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:31.955780 kernel: audit: type=1130 audit(1769255911.925:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:31.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:31.956722 kernel: audit: type=1131 audit(1769255911.925:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:31.962979 systemd[1]: sshd@5-10.0.0.100:22-10.0.0.1:40340.service: Deactivated successfully. Jan 24 11:58:31.966049 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 11:58:31.927000 audit[1808]: USER_END pid=1808 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 11:58:31.968097 systemd-logind[1619]: Session 7 logged out. Waiting for processes to exit. 
Jan 24 11:58:31.971735 systemd[1]: Started sshd@6-10.0.0.100:22-10.0.0.1:40352.service - OpenSSH per-connection server daemon (10.0.0.1:40352). Jan 24 11:58:31.975604 kernel: audit: type=1106 audit(1769255911.927:221): pid=1808 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 11:58:31.975666 kernel: audit: type=1104 audit(1769255911.927:222): pid=1808 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 11:58:31.927000 audit[1808]: CRED_DISP pid=1808 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 11:58:31.974186 systemd-logind[1619]: Removed session 7. Jan 24 11:58:32.003070 kernel: audit: type=1106 audit(1769255911.932:223): pid=1803 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 11:58:32.003134 kernel: audit: type=1104 audit(1769255911.932:224): pid=1803 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 11:58:32.003165 kernel: audit: type=1131 audit(1769255911.962:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.100:22-10.0.0.1:40340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:31.932000 audit[1803]: USER_END pid=1803 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 11:58:31.932000 audit[1803]: CRED_DISP pid=1803 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 11:58:31.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.100:22-10.0.0.1:40340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:31.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.100:22-10.0.0.1:40352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:58:32.140000 audit[1842]: USER_ACCT pid=1842 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 11:58:32.142481 sshd[1842]: Accepted publickey for core from 10.0.0.1 port 40352 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 11:58:32.145625 sshd-session[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 11:58:32.142000 audit[1842]: CRED_ACQ pid=1842 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 11:58:32.143000 audit[1842]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc729c9ac0 a2=3 a3=0 items=0 ppid=1 pid=1842 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:32.143000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 11:58:32.155885 systemd-logind[1619]: New session 8 of user core. Jan 24 11:58:32.184524 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 11:58:32.209000 audit[1842]: USER_START pid=1842 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 11:58:32.237000 audit[1846]: CRED_ACQ pid=1846 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 11:58:32.395000 audit[1847]: USER_ACCT pid=1847 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 11:58:32.396000 audit[1847]: CRED_REFR pid=1847 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 11:58:32.396000 audit[1847]: USER_START pid=1847 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 11:58:32.397713 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 11:58:32.398424 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 11:58:37.887608 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 11:58:37.939318 (dockerd)[1869]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 11:58:39.884357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 11:58:39.892053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 24 11:58:42.532052 dockerd[1869]: time="2026-01-24T11:58:42.531639545Z" level=info msg="Starting up" Jan 24 11:58:42.535995 dockerd[1869]: time="2026-01-24T11:58:42.535927207Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 24 11:58:43.033724 dockerd[1869]: time="2026-01-24T11:58:43.033527009Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 24 11:58:43.273230 dockerd[1869]: time="2026-01-24T11:58:43.271955445Z" level=info msg="Loading containers: start." Jan 24 11:58:43.281811 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 24 11:58:43.281951 kernel: audit: type=1130 audit(1769255923.276:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:43.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:43.277170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 11:58:43.317846 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 11:58:43.322205 kernel: Initializing XFRM netlink socket Jan 24 11:58:43.845000 audit[1939]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:43.854323 kernel: audit: type=1325 audit(1769255923.845:236): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:43.845000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd4ac3de50 a2=0 a3=0 items=0 ppid=1869 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:43.868295 kubelet[1901]: E0124 11:58:43.862110 1901 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 11:58:43.868944 kernel: audit: type=1300 audit(1769255923.845:236): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd4ac3de50 a2=0 a3=0 items=0 ppid=1869 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:43.845000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 11:58:43.871303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 11:58:43.872091 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 11:58:43.874317 systemd[1]: kubelet.service: Consumed 2.629s CPU time, 110.7M memory peak. 
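
The audit PROCTITLE records in this stretch of the log carry the full command line of the audited process as a hex string with NUL-separated arguments, so the iptables/ip6tables rules dockerd is installing can be read back directly. A small decoding sketch, using the first DOCKER-chain record above as input:

    # PROCTITLE stores the command line hex-encoded, NUL between arguments.
    hex_proctitle = (
        "2F7573722F62696E2F69707461626C6573002D2D77616974"
        "002D74006E6174002D4E00444F434B4552"
    )

    args = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(a.decode() for a in args))
    # -> /usr/bin/iptables --wait -t nat -N DOCKER

Decoding the remaining records the same way shows dockerd creating the DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, and DOCKER-USER chains for both iptables and ip6tables before "Loading containers: done."
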
Jan 24 11:58:43.880444 kernel: audit: type=1327 audit(1769255923.845:236): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 11:58:43.880513 kernel: audit: type=1325 audit(1769255923.851:237): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:43.851000 audit[1941]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:43.851000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcbe85fe20 a2=0 a3=0 items=0 ppid=1869 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:43.851000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 11:58:43.910146 kernel: audit: type=1300 audit(1769255923.851:237): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcbe85fe20 a2=0 a3=0 items=0 ppid=1869 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:43.910279 kernel: audit: type=1327 audit(1769255923.851:237): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 11:58:43.910332 kernel: audit: type=1325 audit(1769255923.857:238): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:43.857000 audit[1943]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:43.915231 kernel: audit: type=1300 audit(1769255923.857:238): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8c8a5670 a2=0 a3=0 items=0 ppid=1869 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:43.857000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8c8a5670 a2=0 a3=0 items=0 ppid=1869 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:43.926595 kernel: audit: type=1327 audit(1769255923.857:238): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 11:58:43.857000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 11:58:43.864000 audit[1945]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:43.864000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc41020030 a2=0 a3=0 items=0 ppid=1869 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:43.864000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 11:58:43.873000 audit[1947]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:43.873000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd6f0b2110 a2=0 a3=0 items=0 ppid=1869 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:43.873000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 11:58:43.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 11:58:43.878000 audit[1950]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:43.878000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd24b212b0 a2=0 a3=0 items=0 ppid=1869 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:43.878000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 11:58:43.884000 audit[1952]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:43.884000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd7482a390 a2=0 a3=0 items=0 ppid=1869 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:43.884000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 11:58:43.894000 audit[1954]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:43.894000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffea9ae40c0 a2=0 a3=0 items=0 ppid=1869 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:43.894000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 11:58:44.121000 audit[1957]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:44.121000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffdeca34ae0 a2=0 a3=0 items=0 ppid=1869 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.121000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 24 11:58:44.128000 audit[1959]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:44.128000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffceca23d70 a2=0 a3=0 items=0 ppid=1869 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.128000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 11:58:44.134000 audit[1961]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:44.134000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc5bb525e0 a2=0 a3=0 items=0 ppid=1869 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.134000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 11:58:44.139000 audit[1963]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:44.139000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffed371f420 a2=0 a3=0 items=0 ppid=1869 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.139000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 11:58:44.147000 audit[1965]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:44.147000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe61879f80 a2=0 a3=0 items=0 ppid=1869 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.147000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 11:58:44.266000 audit[1995]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.266000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd6d9d91f0 a2=0 a3=0 items=0 ppid=1869 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.266000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 11:58:44.282000 audit[1997]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.282000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff60f68aa0 a2=0 a3=0 items=0 ppid=1869 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.282000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 11:58:44.288000 audit[1999]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.288000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0c833270 a2=0 a3=0 items=0 ppid=1869 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.288000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 11:58:44.292000 audit[2001]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.292000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce82f3520 a2=0 a3=0 items=0 ppid=1869 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.292000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 11:58:44.304000 audit[2003]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.304000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffde6bf3260 a2=0 a3=0 items=0 ppid=1869 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.304000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 11:58:44.313000 audit[2005]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.313000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd399e5b50 a2=0 a3=0 items=0 ppid=1869 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.313000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 11:58:44.321000 audit[2007]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2007 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.321000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc21419f50 a2=0 a3=0 items=0 ppid=1869 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 11:58:44.333000 audit[2009]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.333000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffcb8da8c50 a2=0 a3=0 items=0 ppid=1869 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.333000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 11:58:44.343000 audit[2011]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.343000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff2242f630 a2=0 a3=0 items=0 ppid=1869 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.343000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 24 11:58:44.349000 audit[2013]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.349000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd3ecb0430 a2=0 a3=0 items=0 ppid=1869 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.349000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 11:58:44.357000 audit[2015]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.357000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffeef2b0ac0 a2=0 a3=0 items=0 ppid=1869 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.357000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 11:58:44.460000 audit[2017]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 24 11:58:44.460000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffede6f6470 a2=0 a3=0 items=0 ppid=1869 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.460000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 11:58:44.506000 audit[2019]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.506000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc1e35a100 a2=0 a3=0 items=0 ppid=1869 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.506000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 11:58:44.535000 audit[2024]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:44.535000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc5cb0ae80 a2=0 a3=0 items=0 ppid=1869 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.535000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 11:58:44.548000 audit[2026]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:44.548000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd818533e0 a2=0 a3=0 items=0 ppid=1869 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.548000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 11:58:44.583000 audit[2028]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:44.583000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc636c7550 a2=0 a3=0 items=0 ppid=1869 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.583000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 11:58:44.777000 audit[2030]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.777000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff0c0aa130 a2=0 a3=0 items=0 ppid=1869 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.777000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 11:58:44.851000 audit[2032]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.851000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc62941a50 a2=0 a3=0 items=0 ppid=1869 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.851000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 11:58:44.868000 audit[2034]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:58:44.868000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe61f235d0 a2=0 a3=0 items=0 ppid=1869 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.868000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 11:58:44.960000 audit[2038]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:44.960000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fffcb884d80 a2=0 a3=0 items=0 ppid=1869 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.960000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 24 11:58:44.967000 audit[2040]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:44.967000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffcb6123b70 a2=0 a3=0 items=0 ppid=1869 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:44.967000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 24 11:58:44.993000 audit[2048]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:44.993000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffdd8994e70 a2=0 a3=0 items=0 ppid=1869 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
24 11:58:44.993000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 24 11:58:45.040000 audit[2054]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:45.040000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff8cad9210 a2=0 a3=0 items=0 ppid=1869 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:45.040000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 24 11:58:45.049000 audit[2056]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:45.049000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffed9692d80 a2=0 a3=0 items=0 ppid=1869 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:45.049000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 24 11:58:45.054000 audit[2058]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:45.054000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff33a56f20 a2=0 a3=0 items=0 ppid=1869 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:45.054000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 24 11:58:45.065000 audit[2060]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:45.065000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffdbf6890b0 a2=0 a3=0 items=0 ppid=1869 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:45.065000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 11:58:45.071000 audit[2062]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:58:45.071000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc1a2238b0 a2=0 a3=0 items=0 ppid=1869 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:58:45.071000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 24 11:58:45.073483 systemd-networkd[1510]: docker0: Link UP Jan 24 11:58:45.126474 dockerd[1869]: time="2026-01-24T11:58:45.126066701Z" level=info msg="Loading containers: done." Jan 24 11:58:45.412484 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1004149836-merged.mount: Deactivated successfully. Jan 24 11:58:45.438352 dockerd[1869]: time="2026-01-24T11:58:45.437136730Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 11:58:45.438352 dockerd[1869]: time="2026-01-24T11:58:45.437929495Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 24 11:58:45.438352 dockerd[1869]: time="2026-01-24T11:58:45.438204115Z" level=info msg="Initializing buildkit" Jan 24 11:58:45.867289 dockerd[1869]: time="2026-01-24T11:58:45.864786460Z" level=info msg="Completed buildkit initialization" Jan 24 11:58:45.923265 dockerd[1869]: time="2026-01-24T11:58:45.922650860Z" level=info msg="Daemon has completed initialization" Jan 24 11:58:45.923861 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 11:58:45.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:45.931755 dockerd[1869]: time="2026-01-24T11:58:45.931446814Z" level=info msg="API listen on /run/docker.sock" Jan 24 11:58:49.568146 containerd[1652]: time="2026-01-24T11:58:49.567187190Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 24 11:58:51.870811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount731827287.mount: Deactivated successfully. Jan 24 11:58:53.878993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 11:58:53.887433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 11:58:54.314965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 11:58:54.346649 kernel: kauditd_printk_skb: 113 callbacks suppressed Jan 24 11:58:54.346793 kernel: audit: type=1130 audit(1769255934.337:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:58:54.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 11:58:54.398216 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 11:58:54.589911 kubelet[2176]: E0124 11:58:54.589438 2176 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 11:58:54.597402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 11:58:54.597742 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 11:58:54.598730 systemd[1]: kubelet.service: Consumed 435ms CPU time, 109.9M memory peak. Jan 24 11:58:54.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 11:58:54.609682 kernel: audit: type=1131 audit(1769255934.596:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 11:58:55.208119 containerd[1652]: time="2026-01-24T11:58:55.207987806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:58:55.213795 containerd[1652]: time="2026-01-24T11:58:55.213458179Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27011539" Jan 24 11:58:55.216526 containerd[1652]: time="2026-01-24T11:58:55.216264357Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:58:55.223217 containerd[1652]: time="2026-01-24T11:58:55.223074549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:58:55.228152 containerd[1652]: time="2026-01-24T11:58:55.228061117Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 5.660687103s" Jan 24 11:58:55.228244 containerd[1652]: time="2026-01-24T11:58:55.228182304Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 24 11:58:55.231493 containerd[1652]: time="2026-01-24T11:58:55.231323884Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 24 11:58:57.015466 containerd[1652]: time="2026-01-24T11:58:57.015288335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:58:57.017857 containerd[1652]: time="2026-01-24T11:58:57.017632222Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active 
requests=0, bytes read=21154285" Jan 24 11:58:57.020611 containerd[1652]: time="2026-01-24T11:58:57.020373043Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:58:57.028130 containerd[1652]: time="2026-01-24T11:58:57.027958948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:58:57.029891 containerd[1652]: time="2026-01-24T11:58:57.029782824Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.798390354s" Jan 24 11:58:57.029891 containerd[1652]: time="2026-01-24T11:58:57.029849284Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 24 11:58:57.031059 containerd[1652]: time="2026-01-24T11:58:57.030706532Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 24 11:58:58.973624 containerd[1652]: time="2026-01-24T11:58:58.962364102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:58:58.986337 containerd[1652]: time="2026-01-24T11:58:58.975689496Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15717792" Jan 24 11:58:58.986337 containerd[1652]: time="2026-01-24T11:58:58.985906414Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:58:59.028410 containerd[1652]: time="2026-01-24T11:58:59.027439201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:58:59.028410 containerd[1652]: time="2026-01-24T11:58:59.028374317Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.997563998s" Jan 24 11:58:59.028410 containerd[1652]: time="2026-01-24T11:58:59.028489676Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 24 11:58:59.033297 containerd[1652]: time="2026-01-24T11:58:59.033193760Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 24 11:59:02.438941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254286765.mount: Deactivated successfully. 
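The var-lib-containerd-tmpmounts-containerd\x2dmount*.mount names in these lines are systemd's escaped form of paths under /var/lib/containerd/tmpmounts/. A minimal Python sketch of the standard systemd path escaping (a "-" separates path components, "\xNN" encodes a literal byte, and the leading "/" is dropped), useful for turning such a unit name back into the mount path:

    import re

    def systemd_unescape_mount(unit: str) -> str:
        # Reverse of systemd's path escaping: "-" -> "/", "\xNN" -> literal byte.
        name = unit.removesuffix(".mount")
        path = "/" + "/".join(name.split("-"))
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), path)

    # Unit name copied from the log line above.
    print(systemd_unescape_mount(r"var-lib-containerd-tmpmounts-containerd\x2dmount1254286765.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount1254286765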
Jan 24 11:59:02.805313 containerd[1652]: time="2026-01-24T11:59:02.804957528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:59:02.807091 containerd[1652]: time="2026-01-24T11:59:02.806836337Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25962311" Jan 24 11:59:02.812273 containerd[1652]: time="2026-01-24T11:59:02.812115704Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:59:02.817811 containerd[1652]: time="2026-01-24T11:59:02.817658412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:59:02.819086 containerd[1652]: time="2026-01-24T11:59:02.818775764Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 3.785507069s" Jan 24 11:59:02.819086 containerd[1652]: time="2026-01-24T11:59:02.818867522Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 24 11:59:02.822310 containerd[1652]: time="2026-01-24T11:59:02.822073561Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 24 11:59:03.867621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3559110812.mount: Deactivated successfully. Jan 24 11:59:04.643788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 24 11:59:04.662015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 11:59:04.808044 update_engine[1620]: I20260124 11:59:04.805114 1620 update_attempter.cc:509] Updating boot flags... Jan 24 11:59:05.198007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 11:59:05.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:59:05.219644 kernel: audit: type=1130 audit(1769255945.196:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:59:05.222257 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 11:59:05.624722 kubelet[2248]: E0124 11:59:05.624511 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 11:59:05.631628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 11:59:05.632376 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
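Each of these kubelet exits is the same failure: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is only written by kubeadm init or kubeadm join, so a restart loop like this is expected until one of those has run; that reading is an inference from standard kubeadm behaviour rather than something the log states. A trivial sketch of the check that keeps failing:

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    if KUBELET_CONFIG.is_file():
        # Once kubeadm has written it, this is the KubeletConfiguration the
        # service will load on its next restart.
        print(KUBELET_CONFIG.read_text()[:300])
    else:
        # Matches the "no such file or directory" error above: the unit exits
        # with status 1 and systemd schedules another restart.
        print("kubelet config not written yet; expect another restart")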
Jan 24 11:59:05.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 11:59:05.633607 systemd[1]: kubelet.service: Consumed 724ms CPU time, 110.4M memory peak. Jan 24 11:59:05.649770 kernel: audit: type=1131 audit(1769255945.631:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 11:59:06.843059 containerd[1652]: time="2026-01-24T11:59:06.842625268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:59:06.845338 containerd[1652]: time="2026-01-24T11:59:06.845264972Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21782385" Jan 24 11:59:06.848512 containerd[1652]: time="2026-01-24T11:59:06.848388221Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:59:06.854432 containerd[1652]: time="2026-01-24T11:59:06.853642478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:59:06.855349 containerd[1652]: time="2026-01-24T11:59:06.855219910Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 4.033107629s" Jan 24 11:59:06.855349 containerd[1652]: time="2026-01-24T11:59:06.855275882Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 24 11:59:06.859659 containerd[1652]: time="2026-01-24T11:59:06.859524045Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 24 11:59:07.333444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount641308533.mount: Deactivated successfully. 
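The containerd pull messages carry enough detail to estimate effective pull throughput. Using the coredns figures reported above (size 22384805 bytes, 4.033107629s) and treating the reported size as a rough stand-in for the bytes actually transferred:

    # Values copied from the coredns "Pulled image" message above.
    size_bytes = 22_384_805
    duration_s = 4.033107629

    rate = size_bytes / duration_s
    print(f"~{rate / 2**20:.1f} MiB/s")  # roughly 5.3 MiB/s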
Jan 24 11:59:07.349600 containerd[1652]: time="2026-01-24T11:59:07.349395433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:59:07.352700 containerd[1652]: time="2026-01-24T11:59:07.352380604Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 24 11:59:07.354300 containerd[1652]: time="2026-01-24T11:59:07.353196206Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:59:07.359098 containerd[1652]: time="2026-01-24T11:59:07.358945485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:59:07.360681 containerd[1652]: time="2026-01-24T11:59:07.359933565Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 500.280248ms" Jan 24 11:59:07.360681 containerd[1652]: time="2026-01-24T11:59:07.359982272Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 24 11:59:07.361719 containerd[1652]: time="2026-01-24T11:59:07.361646868Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 24 11:59:07.896106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714597387.mount: Deactivated successfully. Jan 24 11:59:15.877307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 24 11:59:15.886696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
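This is the fourth scheduled kubelet restart, and the restart jobs have landed about eleven seconds apart (11:58:53, 11:59:04, 11:59:15), which would fit a RestartSec=10 setting in the kubelet unit plus a second or so of run time before each exit; the RestartSec value itself is an assumption, not visible in this log. Checking the spacing directly from the timestamps:

    from datetime import datetime

    # Timestamps of the "Scheduled restart job" messages above.
    stamps = ["11:58:53", "11:59:04", "11:59:15"]
    times = [datetime.strptime(t, "%H:%M:%S") for t in stamps]
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    print(gaps)  # [11.0, 11.0]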
Jan 24 11:59:15.938117 containerd[1652]: time="2026-01-24T11:59:15.936234627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:59:15.938117 containerd[1652]: time="2026-01-24T11:59:15.938083362Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=61186606" Jan 24 11:59:15.940948 containerd[1652]: time="2026-01-24T11:59:15.940891492Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:59:15.949688 containerd[1652]: time="2026-01-24T11:59:15.949640525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 11:59:15.951523 containerd[1652]: time="2026-01-24T11:59:15.951431649Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 8.589717948s" Jan 24 11:59:15.952049 containerd[1652]: time="2026-01-24T11:59:15.951720689Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 24 11:59:16.979007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 11:59:16.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:59:16.991646 kernel: audit: type=1130 audit(1769255956.979:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:59:17.000407 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 11:59:17.148342 kubelet[2364]: E0124 11:59:17.148253 2364 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 11:59:17.155407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 11:59:17.155774 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 11:59:17.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 11:59:17.156650 systemd[1]: kubelet.service: Consumed 818ms CPU time, 110.8M memory peak. Jan 24 11:59:17.169595 kernel: audit: type=1131 audit(1769255957.156:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 24 11:59:22.663267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 11:59:22.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:59:22.663702 systemd[1]: kubelet.service: Consumed 818ms CPU time, 110.8M memory peak. Jan 24 11:59:22.667778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 11:59:22.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:59:22.683146 kernel: audit: type=1130 audit(1769255962.662:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:59:22.683231 kernel: audit: type=1131 audit(1769255962.662:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:59:22.745023 systemd[1]: Reload requested from client PID 2393 ('systemctl') (unit session-8.scope)... Jan 24 11:59:22.745075 systemd[1]: Reloading... Jan 24 11:59:22.944631 zram_generator::config[2442]: No configuration found. Jan 24 11:59:23.667099 systemd[1]: Reloading finished in 921 ms. Jan 24 11:59:23.698000 audit: BPF prog-id=61 op=LOAD Jan 24 11:59:23.705246 kernel: audit: type=1334 audit(1769255963.698:286): prog-id=61 op=LOAD Jan 24 11:59:23.698000 audit: BPF prog-id=47 op=UNLOAD Jan 24 11:59:23.725659 kernel: audit: type=1334 audit(1769255963.698:287): prog-id=47 op=UNLOAD Jan 24 11:59:23.698000 audit: BPF prog-id=62 op=LOAD Jan 24 11:59:23.698000 audit: BPF prog-id=63 op=LOAD Jan 24 11:59:23.730649 kernel: audit: type=1334 audit(1769255963.698:288): prog-id=62 op=LOAD Jan 24 11:59:23.739282 kernel: audit: type=1334 audit(1769255963.698:289): prog-id=63 op=LOAD Jan 24 11:59:23.746116 kernel: audit: type=1334 audit(1769255963.698:290): prog-id=48 op=UNLOAD Jan 24 11:59:23.746299 kernel: audit: type=1334 audit(1769255963.698:291): prog-id=49 op=UNLOAD Jan 24 11:59:23.698000 audit: BPF prog-id=48 op=UNLOAD Jan 24 11:59:23.698000 audit: BPF prog-id=49 op=UNLOAD Jan 24 11:59:23.698000 audit: BPF prog-id=64 op=LOAD Jan 24 11:59:23.762758 kernel: audit: type=1334 audit(1769255963.698:292): prog-id=64 op=LOAD Jan 24 11:59:23.762825 kernel: audit: type=1334 audit(1769255963.698:293): prog-id=65 op=LOAD Jan 24 11:59:23.698000 audit: BPF prog-id=65 op=LOAD Jan 24 11:59:23.698000 audit: BPF prog-id=54 op=UNLOAD Jan 24 11:59:23.698000 audit: BPF prog-id=55 op=UNLOAD Jan 24 11:59:23.702000 audit: BPF prog-id=66 op=LOAD Jan 24 11:59:23.702000 audit: BPF prog-id=51 op=UNLOAD Jan 24 11:59:23.702000 audit: BPF prog-id=67 op=LOAD Jan 24 11:59:23.702000 audit: BPF prog-id=68 op=LOAD Jan 24 11:59:23.702000 audit: BPF prog-id=52 op=UNLOAD Jan 24 11:59:23.702000 audit: BPF prog-id=53 op=UNLOAD Jan 24 11:59:23.706000 audit: BPF prog-id=69 op=LOAD Jan 24 11:59:23.706000 audit: BPF prog-id=50 op=UNLOAD Jan 24 11:59:23.706000 audit: BPF prog-id=70 op=LOAD Jan 24 11:59:23.706000 audit: BPF prog-id=44 op=UNLOAD Jan 24 11:59:23.706000 audit: BPF prog-id=71 op=LOAD Jan 24 11:59:23.706000 audit: BPF prog-id=72 op=LOAD Jan 24 11:59:23.706000 
audit: BPF prog-id=45 op=UNLOAD Jan 24 11:59:23.706000 audit: BPF prog-id=46 op=UNLOAD Jan 24 11:59:23.717000 audit: BPF prog-id=73 op=LOAD Jan 24 11:59:23.717000 audit: BPF prog-id=56 op=UNLOAD Jan 24 11:59:23.729000 audit: BPF prog-id=74 op=LOAD Jan 24 11:59:23.750000 audit: BPF prog-id=41 op=UNLOAD Jan 24 11:59:23.750000 audit: BPF prog-id=75 op=LOAD Jan 24 11:59:23.750000 audit: BPF prog-id=76 op=LOAD Jan 24 11:59:23.750000 audit: BPF prog-id=42 op=UNLOAD Jan 24 11:59:23.750000 audit: BPF prog-id=43 op=UNLOAD Jan 24 11:59:23.761000 audit: BPF prog-id=77 op=LOAD Jan 24 11:59:23.761000 audit: BPF prog-id=58 op=UNLOAD Jan 24 11:59:23.761000 audit: BPF prog-id=78 op=LOAD Jan 24 11:59:23.761000 audit: BPF prog-id=79 op=LOAD Jan 24 11:59:23.761000 audit: BPF prog-id=59 op=UNLOAD Jan 24 11:59:23.761000 audit: BPF prog-id=60 op=UNLOAD Jan 24 11:59:23.780000 audit: BPF prog-id=80 op=LOAD Jan 24 11:59:23.780000 audit: BPF prog-id=57 op=UNLOAD Jan 24 11:59:23.894743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 11:59:23.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:59:23.903093 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 11:59:23.907489 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 11:59:23.909968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 11:59:23.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:59:23.910241 systemd[1]: kubelet.service: Consumed 293ms CPU time, 98.6M memory peak. Jan 24 11:59:23.916770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 11:59:24.664270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 11:59:24.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:59:24.677257 (kubelet)[2488]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 11:59:25.598396 kubelet[2488]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 11:59:25.598396 kubelet[2488]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
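The burst of audit: BPF prog-id=... op=LOAD/UNLOAD records during the reload is consistent with systemd detaching and re-attaching per-unit cgroup BPF programs while it reloads its configuration; that interpretation is an inference from the timing, not something the records themselves say. A small sketch for tallying them from a saved journal dump (journal.txt is a hypothetical file name):

    import re
    from collections import Counter

    ops = Counter()
    with open("journal.txt") as fh:  # e.g. output of journalctl -k > journal.txt
        for line in fh:
            for m in re.finditer(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)", line):
                ops[m.group(2)] += 1
    print(ops)  # e.g. Counter({'LOAD': 20, 'UNLOAD': 20})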
Jan 24 11:59:25.598396 kubelet[2488]: I0124 11:59:25.598505 2488 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 11:59:28.293400 kubelet[2488]: I0124 11:59:28.290246 2488 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 11:59:28.301334 kubelet[2488]: I0124 11:59:28.296026 2488 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 11:59:28.301334 kubelet[2488]: I0124 11:59:28.296833 2488 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 11:59:28.301334 kubelet[2488]: I0124 11:59:28.296885 2488 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 11:59:28.301334 kubelet[2488]: I0124 11:59:28.299484 2488 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 11:59:28.370524 kubelet[2488]: I0124 11:59:28.367078 2488 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 11:59:28.370524 kubelet[2488]: E0124 11:59:28.370414 2488 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 11:59:28.398960 kubelet[2488]: I0124 11:59:28.398795 2488 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 11:59:28.457340 kubelet[2488]: I0124 11:59:28.456806 2488 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 24 11:59:28.469474 kubelet[2488]: I0124 11:59:28.467080 2488 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 11:59:28.469474 kubelet[2488]: I0124 11:59:28.468319 2488 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 11:59:28.469474 kubelet[2488]: I0124 11:59:28.469013 2488 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 11:59:28.469474 kubelet[2488]: I0124 11:59:28.469033 2488 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 11:59:28.470165 kubelet[2488]: I0124 11:59:28.469327 2488 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 24 11:59:28.481406 kubelet[2488]: I0124 11:59:28.480590 2488 state_mem.go:36] "Initialized new in-memory state store" Jan 24 11:59:28.482933 kubelet[2488]: I0124 11:59:28.482825 2488 kubelet.go:475] "Attempting to sync node with API server" Jan 24 11:59:28.482933 kubelet[2488]: I0124 11:59:28.482900 2488 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 11:59:28.483054 kubelet[2488]: I0124 11:59:28.483035 2488 kubelet.go:387] "Adding apiserver pod source" Jan 24 11:59:28.483167 kubelet[2488]: I0124 11:59:28.483115 2488 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 11:59:28.498609 kubelet[2488]: E0124 11:59:28.497971 2488 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 11:59:28.498609 kubelet[2488]: E0124 11:59:28.498210 2488 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 11:59:28.498609 kubelet[2488]: I0124 11:59:28.498347 2488 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 11:59:28.499659 kubelet[2488]: I0124 11:59:28.499634 2488 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 11:59:28.499819 kubelet[2488]: I0124 11:59:28.499801 2488 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 11:59:28.500146 kubelet[2488]: W0124 11:59:28.500125 2488 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 11:59:28.550122 kubelet[2488]: I0124 11:59:28.549869 2488 server.go:1262] "Started kubelet" Jan 24 11:59:28.570659 kubelet[2488]: I0124 11:59:28.553746 2488 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 11:59:28.570900 kubelet[2488]: I0124 11:59:28.553890 2488 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 11:59:28.571226 kubelet[2488]: I0124 11:59:28.571193 2488 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 11:59:28.572511 kubelet[2488]: I0124 11:59:28.571332 2488 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 11:59:28.572923 kubelet[2488]: I0124 11:59:28.572901 2488 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 11:59:28.573178 kubelet[2488]: I0124 11:59:28.570942 2488 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 11:59:28.573839 kubelet[2488]: E0124 11:59:28.568165 2488 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188da8edf2251b6f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 11:59:28.549706607 +0000 UTC m=+3.118252155,LastTimestamp:2026-01-24 11:59:28.549706607 +0000 UTC m=+3.118252155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 11:59:28.584795 kubelet[2488]: I0124 11:59:28.584367 2488 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 11:59:28.590207 kubelet[2488]: I0124 11:59:28.586783 2488 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 11:59:28.593608 kubelet[2488]: E0124 11:59:28.591165 2488 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 11:59:28.593608 kubelet[2488]: E0124 11:59:28.591663 2488 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 11:59:28.593608 kubelet[2488]: I0124 11:59:28.592611 2488 server.go:310] "Adding debug handlers to kubelet server" Jan 24 11:59:28.593608 kubelet[2488]: I0124 11:59:28.592994 2488 reconciler.go:29] "Reconciler: start to sync state" Jan 24 11:59:28.593608 kubelet[2488]: E0124 11:59:28.593176 2488 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 11:59:28.593857 kubelet[2488]: E0124 11:59:28.593718 2488 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="200ms" Jan 24 11:59:28.596681 kubelet[2488]: I0124 11:59:28.596617 2488 factory.go:223] Registration of the systemd container factory successfully Jan 24 11:59:28.597957 kubelet[2488]: I0124 11:59:28.596805 2488 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 11:59:28.601290 kubelet[2488]: I0124 11:59:28.599933 2488 factory.go:223] Registration of the containerd container factory successfully Jan 24 11:59:28.652000 audit[2509]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:59:28.660278 kernel: kauditd_printk_skb: 35 callbacks suppressed Jan 24 11:59:28.660525 kernel: audit: type=1325 audit(1769255968.652:329): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:59:28.652000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffec5342080 a2=0 a3=0 items=0 ppid=2488 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.652000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 11:59:28.693121 kernel: audit: type=1300 audit(1769255968.652:329): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffec5342080 a2=0 a3=0 items=0 ppid=2488 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.697303 kernel: audit: type=1327 audit(1769255968.652:329): proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 11:59:28.700753 kernel: audit: type=1325 audit(1769255968.656:330): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:59:28.656000 audit[2510]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:59:28.702531 kubelet[2488]: E0124 11:59:28.699537 2488 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" 
Jan 24 11:59:28.756453 kernel: audit: type=1300 audit(1769255968.656:330): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd458ac660 a2=0 a3=0 items=0 ppid=2488 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.761213 kernel: audit: type=1327 audit(1769255968.656:330): proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 11:59:28.761320 kernel: audit: type=1325 audit(1769255968.663:331): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:59:28.656000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd458ac660 a2=0 a3=0 items=0 ppid=2488 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.763621 kernel: audit: type=1300 audit(1769255968.663:331): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff2d1000b0 a2=0 a3=0 items=0 ppid=2488 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.656000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 11:59:28.663000 audit[2512]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:59:28.663000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff2d1000b0 a2=0 a3=0 items=0 ppid=2488 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.799122 kernel: audit: type=1327 audit(1769255968.663:331): proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 11:59:28.803126 kernel: audit: type=1325 audit(1769255968.670:332): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:59:28.663000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 11:59:28.670000 audit[2514]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:59:28.670000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdb483f480 a2=0 a3=0 items=0 ppid=2488 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.670000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 11:59:28.846727 kubelet[2488]: E0124 11:59:28.846523 2488 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: 
connection refused" interval="400ms" Jan 24 11:59:28.847043 kubelet[2488]: E0124 11:59:28.846855 2488 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 11:59:28.853000 audit[2521]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:59:28.853000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff3449c7d0 a2=0 a3=0 items=0 ppid=2488 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.853000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 24 11:59:28.854946 kubelet[2488]: I0124 11:59:28.854799 2488 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 11:59:28.855714 kubelet[2488]: I0124 11:59:28.855276 2488 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 11:59:28.855714 kubelet[2488]: I0124 11:59:28.855306 2488 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 11:59:28.855714 kubelet[2488]: I0124 11:59:28.855510 2488 state_mem.go:36] "Initialized new in-memory state store" Jan 24 11:59:28.857000 audit[2523]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:59:28.857000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe612af8b0 a2=0 a3=0 items=0 ppid=2488 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.857000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 11:59:28.858000 audit[2522]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:59:28.858000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffec1a4ab60 a2=0 a3=0 items=0 ppid=2488 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.858000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 11:59:28.861471 kubelet[2488]: I0124 11:59:28.859814 2488 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 24 11:59:28.861471 kubelet[2488]: I0124 11:59:28.859971 2488 policy_none.go:49] "None policy: Start" Jan 24 11:59:28.861471 kubelet[2488]: I0124 11:59:28.860043 2488 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 11:59:28.861471 kubelet[2488]: I0124 11:59:28.859974 2488 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 11:59:28.861471 kubelet[2488]: I0124 11:59:28.860111 2488 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 11:59:28.861471 kubelet[2488]: I0124 11:59:28.860183 2488 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 11:59:28.861471 kubelet[2488]: E0124 11:59:28.860337 2488 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 11:59:28.861951 kubelet[2488]: E0124 11:59:28.861860 2488 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 11:59:28.862000 audit[2524]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:59:28.862000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8f293e00 a2=0 a3=0 items=0 ppid=2488 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.862000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 11:59:28.864073 kubelet[2488]: I0124 11:59:28.863772 2488 policy_none.go:47] "Start" Jan 24 11:59:28.864000 audit[2525]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:59:28.864000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc4ce5e960 a2=0 a3=0 items=0 ppid=2488 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 11:59:28.865000 audit[2526]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 11:59:28.865000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffcc561d00 a2=0 a3=0 items=0 ppid=2488 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.865000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 11:59:28.868000 audit[2527]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:59:28.868000 audit[2527]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7fffefdc2da0 a2=0 a3=0 items=0 ppid=2488 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 11:59:28.873000 audit[2528]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 11:59:28.873000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9b74a650 a2=0 a3=0 items=0 ppid=2488 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:28.873000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 11:59:28.879220 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 11:59:28.898991 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 11:59:28.909387 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 11:59:28.940159 kubelet[2488]: E0124 11:59:28.940093 2488 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 11:59:28.940619 kubelet[2488]: I0124 11:59:28.940500 2488 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 11:59:28.940619 kubelet[2488]: I0124 11:59:28.940539 2488 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 11:59:28.941243 kubelet[2488]: I0124 11:59:28.941092 2488 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 11:59:28.943637 kubelet[2488]: E0124 11:59:28.942996 2488 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 11:59:28.943637 kubelet[2488]: E0124 11:59:28.943082 2488 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 11:59:28.992650 systemd[1]: Created slice kubepods-burstable-podcf37f17bfb3fd93cd87e89acdc8f69e5.slice - libcontainer container kubepods-burstable-podcf37f17bfb3fd93cd87e89acdc8f69e5.slice. 
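The PROCTITLE field in these audit records is the process's argument vector, hex-encoded with NUL bytes separating the arguments, so the exact iptables invocations can be recovered straight from the log:

    def decode_proctitle(hex_value: str) -> str:
        # auditd records /proc/<pid>/cmdline hex-encoded; arguments are NUL-separated.
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

    # Value copied verbatim from one of the records above.
    print(decode_proctitle(
        "69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572"
    ))
    # -> iptables -w 5 -N KUBE-FIREWALL -t filter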
Jan 24 11:59:29.035952 kubelet[2488]: E0124 11:59:29.035846 2488 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 11:59:29.038967 kubelet[2488]: I0124 11:59:29.038869 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 11:59:29.038967 kubelet[2488]: I0124 11:59:29.038931 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf37f17bfb3fd93cd87e89acdc8f69e5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf37f17bfb3fd93cd87e89acdc8f69e5\") " pod="kube-system/kube-apiserver-localhost" Jan 24 11:59:29.038967 kubelet[2488]: I0124 11:59:29.038967 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 11:59:29.039146 kubelet[2488]: I0124 11:59:29.038990 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 11:59:29.039146 kubelet[2488]: I0124 11:59:29.039013 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 11:59:29.039146 kubelet[2488]: I0124 11:59:29.039033 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 24 11:59:29.039146 kubelet[2488]: I0124 11:59:29.039050 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf37f17bfb3fd93cd87e89acdc8f69e5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf37f17bfb3fd93cd87e89acdc8f69e5\") " pod="kube-system/kube-apiserver-localhost" Jan 24 11:59:29.039146 kubelet[2488]: I0124 11:59:29.039073 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf37f17bfb3fd93cd87e89acdc8f69e5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cf37f17bfb3fd93cd87e89acdc8f69e5\") " pod="kube-system/kube-apiserver-localhost" Jan 24 11:59:29.039335 kubelet[2488]: I0124 11:59:29.039094 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 11:59:29.042794 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 24 11:59:29.043652 kubelet[2488]: I0124 11:59:29.042803 2488 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 11:59:29.044167 kubelet[2488]: E0124 11:59:29.044100 2488 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 24 11:59:29.067285 kubelet[2488]: E0124 11:59:29.067109 2488 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 11:59:29.076608 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 24 11:59:29.084492 kubelet[2488]: E0124 11:59:29.084254 2488 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 11:59:29.246619 kubelet[2488]: I0124 11:59:29.246374 2488 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 11:59:29.247538 kubelet[2488]: E0124 11:59:29.246913 2488 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 24 11:59:29.247538 kubelet[2488]: E0124 11:59:29.247403 2488 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="800ms" Jan 24 11:59:29.378086 kubelet[2488]: E0124 11:59:29.377774 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:29.381037 containerd[1652]: time="2026-01-24T11:59:29.380107613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cf37f17bfb3fd93cd87e89acdc8f69e5,Namespace:kube-system,Attempt:0,}" Jan 24 11:59:29.390428 kubelet[2488]: E0124 11:59:29.390376 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:29.393101 containerd[1652]: time="2026-01-24T11:59:29.393037222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 24 11:59:29.394418 kubelet[2488]: E0124 11:59:29.394303 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:29.395202 containerd[1652]: time="2026-01-24T11:59:29.395139013Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 24 11:59:29.511283 kubelet[2488]: E0124 11:59:29.509725 2488 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 11:59:29.658775 kubelet[2488]: I0124 11:59:29.657304 2488 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 11:59:29.658775 kubelet[2488]: E0124 11:59:29.658522 2488 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 24 11:59:29.774432 kubelet[2488]: E0124 11:59:29.774161 2488 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 11:59:29.829520 kubelet[2488]: E0124 11:59:29.829084 2488 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 11:59:29.937178 kubelet[2488]: E0124 11:59:29.937091 2488 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 11:59:30.050081 kubelet[2488]: E0124 11:59:30.049152 2488 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="1.6s" Jan 24 11:59:30.226441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1455495258.mount: Deactivated successfully. 
Jan 24 11:59:30.247178 containerd[1652]: time="2026-01-24T11:59:30.246943645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 11:59:30.252317 containerd[1652]: time="2026-01-24T11:59:30.252015371Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=316581" Jan 24 11:59:30.266986 containerd[1652]: time="2026-01-24T11:59:30.266041211Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 11:59:30.273085 containerd[1652]: time="2026-01-24T11:59:30.272647739Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 11:59:30.276672 containerd[1652]: time="2026-01-24T11:59:30.276300921Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 11:59:30.280038 containerd[1652]: time="2026-01-24T11:59:30.279881531Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 24 11:59:30.281835 containerd[1652]: time="2026-01-24T11:59:30.281498835Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 24 11:59:30.287848 containerd[1652]: time="2026-01-24T11:59:30.287630203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 11:59:30.290463 containerd[1652]: time="2026-01-24T11:59:30.288887808Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 886.08427ms" Jan 24 11:59:30.292074 containerd[1652]: time="2026-01-24T11:59:30.291711033Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 890.753942ms" Jan 24 11:59:30.292704 containerd[1652]: time="2026-01-24T11:59:30.292636625Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 898.277492ms" Jan 24 11:59:30.399502 containerd[1652]: time="2026-01-24T11:59:30.399453877Z" level=info msg="connecting to shim 38049098287d307a7a18eabf3a814f9cdacd8d922167e511e052355178b4e493" address="unix:///run/containerd/s/f0f70f45ffce4d9b1f3f9fab3938851c51036300b0ab8be40c369a1fdd23299c" namespace=k8s.io protocol=ttrpc version=3 Jan 24 
11:59:30.405306 containerd[1652]: time="2026-01-24T11:59:30.405168748Z" level=info msg="connecting to shim ecf332f39bed1f3beb4cca9aea6c9b946ba9fdc568766fdfa96d44e654d8ccac" address="unix:///run/containerd/s/c1d1a57834b905fa1c186be3117843b30911256b26b5d928de3f096b7d051063" namespace=k8s.io protocol=ttrpc version=3 Jan 24 11:59:30.433990 containerd[1652]: time="2026-01-24T11:59:30.433494096Z" level=info msg="connecting to shim 652fffa2d414f160be3ef5236c25e4f7f4b39384ceb067561558c9086da14b18" address="unix:///run/containerd/s/2c4fb6312d02ba59b503f6bd14298c43738f61851418152903589004bd146307" namespace=k8s.io protocol=ttrpc version=3 Jan 24 11:59:30.545013 kubelet[2488]: E0124 11:59:30.544664 2488 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 11:59:30.550838 kubelet[2488]: I0124 11:59:30.548024 2488 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 11:59:30.550838 kubelet[2488]: E0124 11:59:30.549502 2488 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 24 11:59:30.957968 systemd[1]: Started cri-containerd-38049098287d307a7a18eabf3a814f9cdacd8d922167e511e052355178b4e493.scope - libcontainer container 38049098287d307a7a18eabf3a814f9cdacd8d922167e511e052355178b4e493. Jan 24 11:59:30.961739 systemd[1]: Started cri-containerd-ecf332f39bed1f3beb4cca9aea6c9b946ba9fdc568766fdfa96d44e654d8ccac.scope - libcontainer container ecf332f39bed1f3beb4cca9aea6c9b946ba9fdc568766fdfa96d44e654d8ccac. Jan 24 11:59:31.008082 systemd[1]: Started cri-containerd-652fffa2d414f160be3ef5236c25e4f7f4b39384ceb067561558c9086da14b18.scope - libcontainer container 652fffa2d414f160be3ef5236c25e4f7f4b39384ceb067561558c9086da14b18. 
Jan 24 11:59:31.031000 audit: BPF prog-id=81 op=LOAD Jan 24 11:59:31.037000 audit: BPF prog-id=82 op=LOAD Jan 24 11:59:31.037000 audit[2580]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2556 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563663333326633396265643166336265623463636139616561366339 Jan 24 11:59:31.039000 audit: BPF prog-id=82 op=UNLOAD Jan 24 11:59:31.039000 audit[2580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563663333326633396265643166336265623463636139616561366339 Jan 24 11:59:31.041000 audit: BPF prog-id=83 op=LOAD Jan 24 11:59:31.041000 audit[2580]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2556 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.041000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563663333326633396265643166336265623463636139616561366339 Jan 24 11:59:31.042000 audit: BPF prog-id=84 op=LOAD Jan 24 11:59:31.042000 audit[2580]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2556 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563663333326633396265643166336265623463636139616561366339 Jan 24 11:59:31.042000 audit: BPF prog-id=84 op=UNLOAD Jan 24 11:59:31.042000 audit[2580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563663333326633396265643166336265623463636139616561366339 Jan 24 11:59:31.042000 audit: BPF prog-id=83 op=UNLOAD Jan 24 11:59:31.042000 audit[2580]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563663333326633396265643166336265623463636139616561366339 Jan 24 11:59:31.043000 audit: BPF prog-id=85 op=LOAD Jan 24 11:59:31.043000 audit[2580]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2556 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563663333326633396265643166336265623463636139616561366339 Jan 24 11:59:31.049000 audit: BPF prog-id=86 op=LOAD Jan 24 11:59:31.051000 audit: BPF prog-id=87 op=LOAD Jan 24 11:59:31.051000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2551 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338303439303938323837643330376137613138656162663361383134 Jan 24 11:59:31.051000 audit: BPF prog-id=87 op=UNLOAD Jan 24 11:59:31.051000 audit[2584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338303439303938323837643330376137613138656162663361383134 Jan 24 11:59:31.052000 audit: BPF prog-id=88 op=LOAD Jan 24 11:59:31.052000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2551 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338303439303938323837643330376137613138656162663361383134 Jan 24 11:59:31.052000 audit: BPF prog-id=89 op=LOAD Jan 24 11:59:31.052000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2551 pid=2584 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338303439303938323837643330376137613138656162663361383134 Jan 24 11:59:31.052000 audit: BPF prog-id=89 op=UNLOAD Jan 24 11:59:31.052000 audit[2584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338303439303938323837643330376137613138656162663361383134 Jan 24 11:59:31.052000 audit: BPF prog-id=88 op=UNLOAD Jan 24 11:59:31.052000 audit[2584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338303439303938323837643330376137613138656162663361383134 Jan 24 11:59:31.053000 audit: BPF prog-id=90 op=LOAD Jan 24 11:59:31.053000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2551 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338303439303938323837643330376137613138656162663361383134 Jan 24 11:59:31.057000 audit: BPF prog-id=91 op=LOAD Jan 24 11:59:31.059000 audit: BPF prog-id=92 op=LOAD Jan 24 11:59:31.059000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2575 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635326666666132643431346631363062653365663532333663323565 Jan 24 11:59:31.059000 audit: BPF prog-id=92 op=UNLOAD Jan 24 11:59:31.059000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635326666666132643431346631363062653365663532333663323565 Jan 24 11:59:31.060000 audit: BPF prog-id=93 op=LOAD Jan 24 11:59:31.060000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2575 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635326666666132643431346631363062653365663532333663323565 Jan 24 11:59:31.060000 audit: BPF prog-id=94 op=LOAD Jan 24 11:59:31.060000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2575 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635326666666132643431346631363062653365663532333663323565 Jan 24 11:59:31.060000 audit: BPF prog-id=94 op=UNLOAD Jan 24 11:59:31.060000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635326666666132643431346631363062653365663532333663323565 Jan 24 11:59:31.062000 audit: BPF prog-id=93 op=UNLOAD Jan 24 11:59:31.062000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635326666666132643431346631363062653365663532333663323565 Jan 24 11:59:31.062000 audit: BPF prog-id=95 op=LOAD Jan 24 11:59:31.062000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2575 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.062000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635326666666132643431346631363062653365663532333663323565 Jan 24 11:59:31.209832 containerd[1652]: time="2026-01-24T11:59:31.209384548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecf332f39bed1f3beb4cca9aea6c9b946ba9fdc568766fdfa96d44e654d8ccac\"" Jan 24 11:59:31.249700 kubelet[2488]: E0124 11:59:31.244318 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:31.261700 containerd[1652]: time="2026-01-24T11:59:31.260853687Z" level=info msg="CreateContainer within sandbox \"ecf332f39bed1f3beb4cca9aea6c9b946ba9fdc568766fdfa96d44e654d8ccac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 11:59:31.262933 containerd[1652]: time="2026-01-24T11:59:31.262892370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cf37f17bfb3fd93cd87e89acdc8f69e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"38049098287d307a7a18eabf3a814f9cdacd8d922167e511e052355178b4e493\"" Jan 24 11:59:31.264255 kubelet[2488]: E0124 11:59:31.264200 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:31.267459 containerd[1652]: time="2026-01-24T11:59:31.267382551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"652fffa2d414f160be3ef5236c25e4f7f4b39384ceb067561558c9086da14b18\"" Jan 24 11:59:31.271138 kubelet[2488]: E0124 11:59:31.270538 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:31.273500 containerd[1652]: time="2026-01-24T11:59:31.273393128Z" level=info msg="CreateContainer within sandbox \"38049098287d307a7a18eabf3a814f9cdacd8d922167e511e052355178b4e493\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 11:59:31.281184 containerd[1652]: time="2026-01-24T11:59:31.281133289Z" level=info msg="CreateContainer within sandbox \"652fffa2d414f160be3ef5236c25e4f7f4b39384ceb067561558c9086da14b18\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 11:59:31.329821 containerd[1652]: time="2026-01-24T11:59:31.328650861Z" level=info msg="Container 3ab15724639d231409d3dd6fabcdc94cc0090de274231494cccb3384cb40e8ee: CDI devices from CRI Config.CDIDevices: []" Jan 24 11:59:31.334422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4068441721.mount: Deactivated successfully. 
Jan 24 11:59:31.349699 containerd[1652]: time="2026-01-24T11:59:31.345866481Z" level=info msg="Container 1d70dd60b170f0c10c0b60c9bbbee3892605f093b204fce9f5cf6ef4823c408b: CDI devices from CRI Config.CDIDevices: []" Jan 24 11:59:31.365363 containerd[1652]: time="2026-01-24T11:59:31.365161929Z" level=info msg="Container 60f38356cca0e92cf7b20c710585bc5fa10cb28329fc3b4218de2d3ab1d5b014: CDI devices from CRI Config.CDIDevices: []" Jan 24 11:59:31.382359 containerd[1652]: time="2026-01-24T11:59:31.382237314Z" level=info msg="CreateContainer within sandbox \"ecf332f39bed1f3beb4cca9aea6c9b946ba9fdc568766fdfa96d44e654d8ccac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ab15724639d231409d3dd6fabcdc94cc0090de274231494cccb3384cb40e8ee\"" Jan 24 11:59:31.383490 containerd[1652]: time="2026-01-24T11:59:31.383408440Z" level=info msg="StartContainer for \"3ab15724639d231409d3dd6fabcdc94cc0090de274231494cccb3384cb40e8ee\"" Jan 24 11:59:31.391296 containerd[1652]: time="2026-01-24T11:59:31.391069638Z" level=info msg="connecting to shim 3ab15724639d231409d3dd6fabcdc94cc0090de274231494cccb3384cb40e8ee" address="unix:///run/containerd/s/c1d1a57834b905fa1c186be3117843b30911256b26b5d928de3f096b7d051063" protocol=ttrpc version=3 Jan 24 11:59:31.394120 containerd[1652]: time="2026-01-24T11:59:31.394005440Z" level=info msg="CreateContainer within sandbox \"652fffa2d414f160be3ef5236c25e4f7f4b39384ceb067561558c9086da14b18\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1d70dd60b170f0c10c0b60c9bbbee3892605f093b204fce9f5cf6ef4823c408b\"" Jan 24 11:59:31.401379 containerd[1652]: time="2026-01-24T11:59:31.401286453Z" level=info msg="StartContainer for \"1d70dd60b170f0c10c0b60c9bbbee3892605f093b204fce9f5cf6ef4823c408b\"" Jan 24 11:59:31.403630 containerd[1652]: time="2026-01-24T11:59:31.402975290Z" level=info msg="connecting to shim 1d70dd60b170f0c10c0b60c9bbbee3892605f093b204fce9f5cf6ef4823c408b" address="unix:///run/containerd/s/2c4fb6312d02ba59b503f6bd14298c43738f61851418152903589004bd146307" protocol=ttrpc version=3 Jan 24 11:59:31.405359 containerd[1652]: time="2026-01-24T11:59:31.405235954Z" level=info msg="CreateContainer within sandbox \"38049098287d307a7a18eabf3a814f9cdacd8d922167e511e052355178b4e493\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"60f38356cca0e92cf7b20c710585bc5fa10cb28329fc3b4218de2d3ab1d5b014\"" Jan 24 11:59:31.407865 containerd[1652]: time="2026-01-24T11:59:31.407836163Z" level=info msg="StartContainer for \"60f38356cca0e92cf7b20c710585bc5fa10cb28329fc3b4218de2d3ab1d5b014\"" Jan 24 11:59:31.410803 containerd[1652]: time="2026-01-24T11:59:31.410771752Z" level=info msg="connecting to shim 60f38356cca0e92cf7b20c710585bc5fa10cb28329fc3b4218de2d3ab1d5b014" address="unix:///run/containerd/s/f0f70f45ffce4d9b1f3f9fab3938851c51036300b0ab8be40c369a1fdd23299c" protocol=ttrpc version=3 Jan 24 11:59:31.449883 systemd[1]: Started cri-containerd-3ab15724639d231409d3dd6fabcdc94cc0090de274231494cccb3384cb40e8ee.scope - libcontainer container 3ab15724639d231409d3dd6fabcdc94cc0090de274231494cccb3384cb40e8ee. Jan 24 11:59:31.483135 systemd[1]: Started cri-containerd-1d70dd60b170f0c10c0b60c9bbbee3892605f093b204fce9f5cf6ef4823c408b.scope - libcontainer container 1d70dd60b170f0c10c0b60c9bbbee3892605f093b204fce9f5cf6ef4823c408b. 
Jan 24 11:59:31.516207 systemd[1]: Started cri-containerd-60f38356cca0e92cf7b20c710585bc5fa10cb28329fc3b4218de2d3ab1d5b014.scope - libcontainer container 60f38356cca0e92cf7b20c710585bc5fa10cb28329fc3b4218de2d3ab1d5b014. Jan 24 11:59:31.531000 audit: BPF prog-id=96 op=LOAD Jan 24 11:59:31.534000 audit: BPF prog-id=97 op=LOAD Jan 24 11:59:31.534000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2556 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.534000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361623135373234363339643233313430396433646436666162636463 Jan 24 11:59:31.534000 audit: BPF prog-id=97 op=UNLOAD Jan 24 11:59:31.534000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.534000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361623135373234363339643233313430396433646436666162636463 Jan 24 11:59:31.534000 audit: BPF prog-id=98 op=LOAD Jan 24 11:59:31.534000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2556 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.534000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361623135373234363339643233313430396433646436666162636463 Jan 24 11:59:31.534000 audit: BPF prog-id=99 op=LOAD Jan 24 11:59:31.534000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2556 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.534000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361623135373234363339643233313430396433646436666162636463 Jan 24 11:59:31.534000 audit: BPF prog-id=99 op=UNLOAD Jan 24 11:59:31.534000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.534000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361623135373234363339643233313430396433646436666162636463 Jan 24 11:59:31.534000 audit: BPF prog-id=98 op=UNLOAD Jan 24 11:59:31.534000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.534000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361623135373234363339643233313430396433646436666162636463 Jan 24 11:59:31.534000 audit: BPF prog-id=100 op=LOAD Jan 24 11:59:31.534000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2556 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.534000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361623135373234363339643233313430396433646436666162636463 Jan 24 11:59:31.548000 audit: BPF prog-id=101 op=LOAD Jan 24 11:59:31.549000 audit: BPF prog-id=102 op=LOAD Jan 24 11:59:31.549000 audit[2678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2575 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164373064643630623137306630633130633062363063396262626565 Jan 24 11:59:31.549000 audit: BPF prog-id=102 op=UNLOAD Jan 24 11:59:31.549000 audit[2678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164373064643630623137306630633130633062363063396262626565 Jan 24 11:59:31.550000 audit: BPF prog-id=103 op=LOAD Jan 24 11:59:31.550000 audit[2678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2575 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.550000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164373064643630623137306630633130633062363063396262626565 Jan 24 11:59:31.550000 audit: BPF prog-id=104 op=LOAD Jan 24 11:59:31.550000 audit[2678]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2575 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164373064643630623137306630633130633062363063396262626565 Jan 24 11:59:31.550000 audit: BPF prog-id=104 op=UNLOAD Jan 24 11:59:31.550000 audit[2678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164373064643630623137306630633130633062363063396262626565 Jan 24 11:59:31.550000 audit: BPF prog-id=103 op=UNLOAD Jan 24 11:59:31.550000 audit[2678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164373064643630623137306630633130633062363063396262626565 Jan 24 11:59:31.550000 audit: BPF prog-id=105 op=LOAD Jan 24 11:59:31.550000 audit[2678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2575 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164373064643630623137306630633130633062363063396262626565 Jan 24 11:59:31.551678 kubelet[2488]: E0124 11:59:31.550950 2488 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 11:59:31.570000 audit: BPF prog-id=106 op=LOAD Jan 24 11:59:31.572000 audit: BPF prog-id=107 op=LOAD Jan 24 11:59:31.572000 audit[2679]: SYSCALL arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2551 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663338333536636361306539326366376232306337313035383562 Jan 24 11:59:31.572000 audit: BPF prog-id=107 op=UNLOAD Jan 24 11:59:31.572000 audit[2679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663338333536636361306539326366376232306337313035383562 Jan 24 11:59:31.572000 audit: BPF prog-id=108 op=LOAD Jan 24 11:59:31.572000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2551 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663338333536636361306539326366376232306337313035383562 Jan 24 11:59:31.572000 audit: BPF prog-id=109 op=LOAD Jan 24 11:59:31.572000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2551 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663338333536636361306539326366376232306337313035383562 Jan 24 11:59:31.572000 audit: BPF prog-id=109 op=UNLOAD Jan 24 11:59:31.572000 audit[2679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663338333536636361306539326366376232306337313035383562 Jan 24 11:59:31.572000 audit: BPF prog-id=108 op=UNLOAD Jan 24 11:59:31.572000 audit[2679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2551 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663338333536636361306539326366376232306337313035383562 Jan 24 11:59:31.572000 audit: BPF prog-id=110 op=LOAD Jan 24 11:59:31.572000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2551 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 11:59:31.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630663338333536636361306539326366376232306337313035383562 Jan 24 11:59:31.654766 kubelet[2488]: E0124 11:59:31.654689 2488 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="3.2s" Jan 24 11:59:31.664337 containerd[1652]: time="2026-01-24T11:59:31.664282074Z" level=info msg="StartContainer for \"1d70dd60b170f0c10c0b60c9bbbee3892605f093b204fce9f5cf6ef4823c408b\" returns successfully" Jan 24 11:59:31.666814 containerd[1652]: time="2026-01-24T11:59:31.666724066Z" level=info msg="StartContainer for \"3ab15724639d231409d3dd6fabcdc94cc0090de274231494cccb3384cb40e8ee\" returns successfully" Jan 24 11:59:31.685869 containerd[1652]: time="2026-01-24T11:59:31.685354224Z" level=info msg="StartContainer for \"60f38356cca0e92cf7b20c710585bc5fa10cb28329fc3b4218de2d3ab1d5b014\" returns successfully" Jan 24 11:59:31.736868 kubelet[2488]: E0124 11:59:31.736680 2488 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 11:59:31.969621 kubelet[2488]: E0124 11:59:31.969146 2488 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 11:59:31.969621 kubelet[2488]: E0124 11:59:31.969430 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:31.984942 kubelet[2488]: E0124 11:59:31.984399 2488 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 11:59:31.984942 kubelet[2488]: E0124 11:59:31.984760 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:32.244510 kubelet[2488]: E0124 11:59:32.186915 2488 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 
11:59:32.561047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount970435193.mount: Deactivated successfully. Jan 24 11:59:32.653299 kubelet[2488]: I0124 11:59:32.576466 2488 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 11:59:32.653299 kubelet[2488]: E0124 11:59:32.652187 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:33.216706 kubelet[2488]: E0124 11:59:33.215979 2488 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 11:59:33.219410 kubelet[2488]: E0124 11:59:33.217880 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:33.219410 kubelet[2488]: E0124 11:59:33.218697 2488 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 11:59:33.219410 kubelet[2488]: E0124 11:59:33.218928 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:34.247206 kubelet[2488]: E0124 11:59:34.246679 2488 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 11:59:34.252223 kubelet[2488]: E0124 11:59:34.248337 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:37.969911 kubelet[2488]: E0124 11:59:37.967384 2488 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 11:59:37.969911 kubelet[2488]: E0124 11:59:37.968528 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:38.967959 kubelet[2488]: E0124 11:59:38.962941 2488 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 11:59:41.882918 kubelet[2488]: E0124 11:59:41.878065 2488 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 11:59:41.882918 kubelet[2488]: E0124 11:59:41.881741 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:42.678796 kubelet[2488]: E0124 11:59:42.674107 2488 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 24 11:59:42.678796 kubelet[2488]: E0124 11:59:42.674216 2488 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.RuntimeClass" Jan 24 11:59:42.678796 kubelet[2488]: E0124 11:59:42.674675 2488 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 11:59:43.797509 kubelet[2488]: E0124 11:59:43.795010 2488 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 24 11:59:43.910036 kubelet[2488]: E0124 11:59:43.906340 2488 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 11:59:43.934771 kubelet[2488]: E0124 11:59:43.934695 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:43.956493 kubelet[2488]: E0124 11:59:43.955779 2488 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188da8edf2251b6f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 11:59:28.549706607 +0000 UTC m=+3.118252155,LastTimestamp:2026-01-24 11:59:28.549706607 +0000 UTC m=+3.118252155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 11:59:44.428990 kubelet[2488]: E0124 11:59:44.386525 2488 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188da8edf4bbbbd2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 11:59:28.593132498 +0000 UTC m=+3.161678025,LastTimestamp:2026-01-24 11:59:28.593132498 +0000 UTC m=+3.161678025,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 11:59:44.543871 kubelet[2488]: E0124 11:59:44.541152 2488 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188da8ee03d6229c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 11:59:28.846520988 +0000 UTC m=+3.415066526,LastTimestamp:2026-01-24 11:59:28.846520988 +0000 UTC m=+3.415066526,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 11:59:44.554839 kubelet[2488]: I0124 11:59:44.553879 2488 
apiserver.go:52] "Watching apiserver" Jan 24 11:59:44.591974 kubelet[2488]: I0124 11:59:44.591660 2488 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 11:59:44.746304 kubelet[2488]: E0124 11:59:44.744422 2488 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 24 11:59:45.529995 kubelet[2488]: E0124 11:59:45.528670 2488 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 24 11:59:45.926165 kubelet[2488]: I0124 11:59:45.925663 2488 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 11:59:45.985678 kubelet[2488]: I0124 11:59:45.984680 2488 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 11:59:45.994964 kubelet[2488]: I0124 11:59:45.994842 2488 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 11:59:46.062506 kubelet[2488]: I0124 11:59:46.061198 2488 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 11:59:46.069658 kubelet[2488]: E0124 11:59:46.069297 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:46.132650 kubelet[2488]: I0124 11:59:46.131182 2488 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 11:59:46.132650 kubelet[2488]: E0124 11:59:46.135518 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:46.194840 kubelet[2488]: E0124 11:59:46.191368 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:47.974438 kubelet[2488]: E0124 11:59:47.974202 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:48.054585 kubelet[2488]: I0124 11:59:48.054244 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.054183249 podStartE2EDuration="2.054183249s" podCreationTimestamp="2026-01-24 11:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 11:59:48.0464031 +0000 UTC m=+22.614948628" watchObservedRunningTime="2026-01-24 11:59:48.054183249 +0000 UTC m=+22.622728796" Jan 24 11:59:48.138367 kubelet[2488]: I0124 11:59:48.138157 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.138139535 podStartE2EDuration="2.138139535s" podCreationTimestamp="2026-01-24 11:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 11:59:48.138056103 +0000 UTC m=+22.706601651" watchObservedRunningTime="2026-01-24 11:59:48.138139535 +0000 UTC m=+22.706685083" Jan 24 11:59:48.138367 kubelet[2488]: I0124 11:59:48.138279 
2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.138274655 podStartE2EDuration="2.138274655s" podCreationTimestamp="2026-01-24 11:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 11:59:48.082630686 +0000 UTC m=+22.651176244" watchObservedRunningTime="2026-01-24 11:59:48.138274655 +0000 UTC m=+22.706820182" Jan 24 11:59:51.851682 kubelet[2488]: E0124 11:59:51.851363 2488 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:51.883326 systemd[1]: Reload requested from client PID 2785 ('systemctl') (unit session-8.scope)... Jan 24 11:59:51.883371 systemd[1]: Reloading... Jan 24 11:59:52.129638 zram_generator::config[2830]: No configuration found. Jan 24 11:59:52.643403 systemd[1]: Reloading finished in 756 ms. Jan 24 11:59:52.692651 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 11:59:52.706745 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 11:59:52.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:59:52.707510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 11:59:52.707864 systemd[1]: kubelet.service: Consumed 8.786s CPU time, 128.4M memory peak. Jan 24 11:59:52.711600 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 24 11:59:52.711696 kernel: audit: type=1131 audit(1769255992.707:389): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:59:52.722756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 24 11:59:52.729000 audit: BPF prog-id=111 op=LOAD Jan 24 11:59:52.730000 audit: BPF prog-id=112 op=LOAD Jan 24 11:59:52.737725 kernel: audit: type=1334 audit(1769255992.729:390): prog-id=111 op=LOAD Jan 24 11:59:52.737848 kernel: audit: type=1334 audit(1769255992.730:391): prog-id=112 op=LOAD Jan 24 11:59:52.737895 kernel: audit: type=1334 audit(1769255992.730:392): prog-id=64 op=UNLOAD Jan 24 11:59:52.730000 audit: BPF prog-id=64 op=UNLOAD Jan 24 11:59:52.740868 kernel: audit: type=1334 audit(1769255992.730:393): prog-id=65 op=UNLOAD Jan 24 11:59:52.730000 audit: BPF prog-id=65 op=UNLOAD Jan 24 11:59:52.750000 audit: BPF prog-id=113 op=LOAD Jan 24 11:59:52.750000 audit: BPF prog-id=77 op=UNLOAD Jan 24 11:59:52.756229 kernel: audit: type=1334 audit(1769255992.750:394): prog-id=113 op=LOAD Jan 24 11:59:52.756291 kernel: audit: type=1334 audit(1769255992.750:395): prog-id=77 op=UNLOAD Jan 24 11:59:52.756329 kernel: audit: type=1334 audit(1769255992.750:396): prog-id=114 op=LOAD Jan 24 11:59:52.750000 audit: BPF prog-id=114 op=LOAD Jan 24 11:59:52.758883 kernel: audit: type=1334 audit(1769255992.750:397): prog-id=115 op=LOAD Jan 24 11:59:52.750000 audit: BPF prog-id=115 op=LOAD Jan 24 11:59:52.750000 audit: BPF prog-id=78 op=UNLOAD Jan 24 11:59:52.763640 kernel: audit: type=1334 audit(1769255992.750:398): prog-id=78 op=UNLOAD Jan 24 11:59:52.750000 audit: BPF prog-id=79 op=UNLOAD Jan 24 11:59:52.752000 audit: BPF prog-id=116 op=LOAD Jan 24 11:59:52.753000 audit: BPF prog-id=61 op=UNLOAD Jan 24 11:59:52.753000 audit: BPF prog-id=117 op=LOAD Jan 24 11:59:52.753000 audit: BPF prog-id=118 op=LOAD Jan 24 11:59:52.753000 audit: BPF prog-id=62 op=UNLOAD Jan 24 11:59:52.753000 audit: BPF prog-id=63 op=UNLOAD Jan 24 11:59:52.754000 audit: BPF prog-id=119 op=LOAD Jan 24 11:59:52.754000 audit: BPF prog-id=74 op=UNLOAD Jan 24 11:59:52.754000 audit: BPF prog-id=120 op=LOAD Jan 24 11:59:52.754000 audit: BPF prog-id=121 op=LOAD Jan 24 11:59:52.754000 audit: BPF prog-id=75 op=UNLOAD Jan 24 11:59:52.754000 audit: BPF prog-id=76 op=UNLOAD Jan 24 11:59:52.755000 audit: BPF prog-id=122 op=LOAD Jan 24 11:59:52.755000 audit: BPF prog-id=70 op=UNLOAD Jan 24 11:59:52.755000 audit: BPF prog-id=123 op=LOAD Jan 24 11:59:52.755000 audit: BPF prog-id=124 op=LOAD Jan 24 11:59:52.755000 audit: BPF prog-id=71 op=UNLOAD Jan 24 11:59:52.755000 audit: BPF prog-id=72 op=UNLOAD Jan 24 11:59:52.757000 audit: BPF prog-id=125 op=LOAD Jan 24 11:59:52.757000 audit: BPF prog-id=66 op=UNLOAD Jan 24 11:59:52.757000 audit: BPF prog-id=126 op=LOAD Jan 24 11:59:52.757000 audit: BPF prog-id=127 op=LOAD Jan 24 11:59:52.757000 audit: BPF prog-id=67 op=UNLOAD Jan 24 11:59:52.757000 audit: BPF prog-id=68 op=UNLOAD Jan 24 11:59:52.758000 audit: BPF prog-id=128 op=LOAD Jan 24 11:59:52.758000 audit: BPF prog-id=80 op=UNLOAD Jan 24 11:59:52.760000 audit: BPF prog-id=129 op=LOAD Jan 24 11:59:52.760000 audit: BPF prog-id=69 op=UNLOAD Jan 24 11:59:52.761000 audit: BPF prog-id=130 op=LOAD Jan 24 11:59:52.761000 audit: BPF prog-id=73 op=UNLOAD Jan 24 11:59:53.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 11:59:53.126392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 11:59:53.146319 (kubelet)[2876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 11:59:53.306705 kubelet[2876]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 11:59:53.306705 kubelet[2876]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 11:59:53.325466 kubelet[2876]: I0124 11:59:53.306445 2876 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 11:59:53.380458 kubelet[2876]: I0124 11:59:53.379848 2876 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 11:59:53.380458 kubelet[2876]: I0124 11:59:53.379922 2876 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 11:59:53.380458 kubelet[2876]: I0124 11:59:53.379996 2876 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 11:59:53.380458 kubelet[2876]: I0124 11:59:53.380014 2876 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 11:59:53.381368 kubelet[2876]: I0124 11:59:53.380860 2876 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 11:59:53.383250 kubelet[2876]: I0124 11:59:53.382874 2876 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 24 11:59:53.436999 kubelet[2876]: I0124 11:59:53.436432 2876 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 11:59:53.567714 kubelet[2876]: I0124 11:59:53.567501 2876 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 11:59:53.604702 kubelet[2876]: I0124 11:59:53.604406 2876 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 24 11:59:53.606392 kubelet[2876]: I0124 11:59:53.605330 2876 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 11:59:53.606392 kubelet[2876]: I0124 11:59:53.605365 2876 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 11:59:53.606392 kubelet[2876]: I0124 11:59:53.605627 2876 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 11:59:53.606392 kubelet[2876]: I0124 11:59:53.605647 2876 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 11:59:53.607517 kubelet[2876]: I0124 11:59:53.605695 2876 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 24 11:59:53.629997 kubelet[2876]: I0124 11:59:53.628818 2876 state_mem.go:36] "Initialized new in-memory state store" Jan 24 11:59:53.629997 kubelet[2876]: I0124 11:59:53.630154 2876 kubelet.go:475] "Attempting to sync node with API server" Jan 24 11:59:53.629997 kubelet[2876]: I0124 11:59:53.630240 2876 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 11:59:53.629997 kubelet[2876]: I0124 11:59:53.630273 2876 kubelet.go:387] "Adding apiserver pod source" Jan 24 11:59:53.629997 kubelet[2876]: I0124 11:59:53.630327 2876 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 11:59:53.645377 kubelet[2876]: I0124 11:59:53.644054 2876 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 11:59:53.645377 kubelet[2876]: I0124 11:59:53.645051 2876 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 11:59:53.645377 kubelet[2876]: I0124 11:59:53.645091 2876 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 11:59:53.704375 
kubelet[2876]: I0124 11:59:53.703904 2876 server.go:1262] "Started kubelet" Jan 24 11:59:53.705701 kubelet[2876]: I0124 11:59:53.704412 2876 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 11:59:53.708069 kubelet[2876]: I0124 11:59:53.707845 2876 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 11:59:53.708069 kubelet[2876]: I0124 11:59:53.707983 2876 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 11:59:53.728141 kubelet[2876]: I0124 11:59:53.726795 2876 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 11:59:53.731030 kubelet[2876]: I0124 11:59:53.730614 2876 server.go:310] "Adding debug handlers to kubelet server" Jan 24 11:59:53.733004 kubelet[2876]: I0124 11:59:53.732969 2876 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 11:59:53.737605 kubelet[2876]: I0124 11:59:53.735680 2876 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 11:59:53.737605 kubelet[2876]: I0124 11:59:53.736016 2876 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 11:59:53.737605 kubelet[2876]: I0124 11:59:53.737232 2876 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 11:59:53.737605 kubelet[2876]: I0124 11:59:53.737518 2876 reconciler.go:29] "Reconciler: start to sync state" Jan 24 11:59:53.765601 kubelet[2876]: I0124 11:59:53.765001 2876 factory.go:223] Registration of the systemd container factory successfully Jan 24 11:59:53.770802 kubelet[2876]: I0124 11:59:53.770643 2876 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 11:59:53.780462 kubelet[2876]: E0124 11:59:53.780294 2876 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 11:59:53.782774 kubelet[2876]: I0124 11:59:53.779656 2876 factory.go:223] Registration of the containerd container factory successfully Jan 24 11:59:53.856619 kubelet[2876]: I0124 11:59:53.856144 2876 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 24 11:59:53.887091 kubelet[2876]: I0124 11:59:53.886033 2876 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 24 11:59:53.887091 kubelet[2876]: I0124 11:59:53.886094 2876 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 11:59:53.887091 kubelet[2876]: I0124 11:59:53.886674 2876 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 11:59:53.887091 kubelet[2876]: E0124 11:59:53.886875 2876 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 11:59:53.965268 kubelet[2876]: I0124 11:59:53.963242 2876 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 11:59:53.965268 kubelet[2876]: I0124 11:59:53.963271 2876 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 11:59:53.965268 kubelet[2876]: I0124 11:59:53.963295 2876 state_mem.go:36] "Initialized new in-memory state store" Jan 24 11:59:53.965268 kubelet[2876]: I0124 11:59:53.963692 2876 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 11:59:53.965268 kubelet[2876]: I0124 11:59:53.963708 2876 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 11:59:53.965268 kubelet[2876]: I0124 11:59:53.963795 2876 policy_none.go:49] "None policy: Start" Jan 24 11:59:53.965268 kubelet[2876]: I0124 11:59:53.963812 2876 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 11:59:53.965268 kubelet[2876]: I0124 11:59:53.963831 2876 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 11:59:53.965268 kubelet[2876]: I0124 11:59:53.963970 2876 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 24 11:59:53.965268 kubelet[2876]: I0124 11:59:53.963985 2876 policy_none.go:47] "Start" Jan 24 11:59:54.002457 kubelet[2876]: E0124 11:59:53.989985 2876 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 24 11:59:54.144686 kubelet[2876]: E0124 11:59:54.143511 2876 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 11:59:54.148622 kubelet[2876]: I0124 11:59:54.146288 2876 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 11:59:54.148906 kubelet[2876]: I0124 11:59:54.148793 2876 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 11:59:54.155075 kubelet[2876]: I0124 11:59:54.154068 2876 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 11:59:54.155075 kubelet[2876]: E0124 11:59:54.154070 2876 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 11:59:54.277187 kubelet[2876]: I0124 11:59:54.276432 2876 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 11:59:54.281047 kubelet[2876]: I0124 11:59:54.279669 2876 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 11:59:54.282340 kubelet[2876]: I0124 11:59:54.282311 2876 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 11:59:54.305337 kubelet[2876]: E0124 11:59:54.305245 2876 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 11:59:54.329654 kubelet[2876]: E0124 11:59:54.329468 2876 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 11:59:54.330409 kubelet[2876]: E0124 11:59:54.329682 2876 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 24 11:59:54.350845 kubelet[2876]: I0124 11:59:54.350800 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 11:59:54.351136 kubelet[2876]: I0124 11:59:54.351030 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 11:59:54.351380 kubelet[2876]: I0124 11:59:54.351282 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 24 11:59:54.351531 kubelet[2876]: I0124 11:59:54.351508 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf37f17bfb3fd93cd87e89acdc8f69e5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf37f17bfb3fd93cd87e89acdc8f69e5\") " pod="kube-system/kube-apiserver-localhost" Jan 24 11:59:54.351910 kubelet[2876]: I0124 11:59:54.351889 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 11:59:54.352205 kubelet[2876]: I0124 11:59:54.352139 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 24 11:59:54.367053 kubelet[2876]: I0124 11:59:54.352483 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf37f17bfb3fd93cd87e89acdc8f69e5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf37f17bfb3fd93cd87e89acdc8f69e5\") " pod="kube-system/kube-apiserver-localhost" Jan 24 11:59:54.367344 kubelet[2876]: I0124 11:59:54.367223 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf37f17bfb3fd93cd87e89acdc8f69e5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cf37f17bfb3fd93cd87e89acdc8f69e5\") " pod="kube-system/kube-apiserver-localhost" Jan 24 11:59:54.368814 kubelet[2876]: I0124 11:59:54.368436 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 11:59:54.379364 kubelet[2876]: I0124 11:59:54.379284 2876 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 11:59:54.440153 kubelet[2876]: I0124 11:59:54.440033 2876 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 24 11:59:54.440153 kubelet[2876]: I0124 11:59:54.440169 2876 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 11:59:54.608940 kubelet[2876]: E0124 11:59:54.607228 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:54.631136 kubelet[2876]: E0124 11:59:54.630692 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:54.631136 kubelet[2876]: E0124 11:59:54.631121 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:54.636527 kubelet[2876]: I0124 11:59:54.636412 2876 apiserver.go:52] "Watching apiserver" Jan 24 11:59:54.767331 kubelet[2876]: I0124 11:59:54.752179 2876 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 11:59:55.031122 kubelet[2876]: E0124 11:59:55.030444 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:55.031122 kubelet[2876]: E0124 11:59:55.030461 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:55.031122 kubelet[2876]: I0124 11:59:55.030817 2876 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 11:59:55.092173 kubelet[2876]: E0124 11:59:55.090298 2876 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 11:59:55.092173 kubelet[2876]: E0124 
11:59:55.090755 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:56.169290 kubelet[2876]: E0124 11:59:56.168945 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:56.177928 kubelet[2876]: E0124 11:59:56.177820 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:57.478729 kubelet[2876]: E0124 11:59:57.478452 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:57.478729 kubelet[2876]: E0124 11:59:57.478936 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:58.264401 kubelet[2876]: I0124 11:59:58.264331 2876 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 11:59:58.281325 containerd[1652]: time="2026-01-24T11:59:58.280195435Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 11:59:58.284644 kubelet[2876]: I0124 11:59:58.283224 2876 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 11:59:59.215312 systemd[1]: Created slice kubepods-besteffort-pod3f33a501_e5c3_4af5_91f1_4af6a1f65cf9.slice - libcontainer container kubepods-besteffort-pod3f33a501_e5c3_4af5_91f1_4af6a1f65cf9.slice. 
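The dns.go:154 warnings that repeat throughout this log mean the node's /etc/resolv.conf lists more nameservers than the kubelet will propagate to pods; only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) are applied and the rest are dropped. A minimal, hypothetical check along those lines, assuming the standard three-nameserver limit and the default resolv.conf path (this is not kubelet's actual code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the resolver limit the kubelet warning refers to:
// only the first three nameserver entries are applied to pods.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf") // assumed path on the node
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d configured, only %v would be applied\n",
			len(servers), servers[:maxNameservers])
	}
}

Trimming the node's resolv.conf to three nameservers (or pointing pods at a local caching resolver) silences this warning.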
Jan 24 11:59:59.284686 kubelet[2876]: I0124 11:59:59.280457 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k6b2\" (UniqueName: \"kubernetes.io/projected/3f33a501-e5c3-4af5-91f1-4af6a1f65cf9-kube-api-access-2k6b2\") pod \"kube-proxy-7ntqx\" (UID: \"3f33a501-e5c3-4af5-91f1-4af6a1f65cf9\") " pod="kube-system/kube-proxy-7ntqx" Jan 24 11:59:59.284686 kubelet[2876]: I0124 11:59:59.280517 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f33a501-e5c3-4af5-91f1-4af6a1f65cf9-lib-modules\") pod \"kube-proxy-7ntqx\" (UID: \"3f33a501-e5c3-4af5-91f1-4af6a1f65cf9\") " pod="kube-system/kube-proxy-7ntqx" Jan 24 11:59:59.284686 kubelet[2876]: I0124 11:59:59.280658 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3f33a501-e5c3-4af5-91f1-4af6a1f65cf9-kube-proxy\") pod \"kube-proxy-7ntqx\" (UID: \"3f33a501-e5c3-4af5-91f1-4af6a1f65cf9\") " pod="kube-system/kube-proxy-7ntqx" Jan 24 11:59:59.284686 kubelet[2876]: I0124 11:59:59.280678 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f33a501-e5c3-4af5-91f1-4af6a1f65cf9-xtables-lock\") pod \"kube-proxy-7ntqx\" (UID: \"3f33a501-e5c3-4af5-91f1-4af6a1f65cf9\") " pod="kube-system/kube-proxy-7ntqx" Jan 24 11:59:59.446329 systemd[1]: Created slice kubepods-besteffort-pod5a8b2d94_dbf2_4ad7_9a56_4df61d9b37fe.slice - libcontainer container kubepods-besteffort-pod5a8b2d94_dbf2_4ad7_9a56_4df61d9b37fe.slice. Jan 24 11:59:59.595012 kubelet[2876]: I0124 11:59:59.594286 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a8b2d94-dbf2-4ad7-9a56-4df61d9b37fe-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-dcbpc\" (UID: \"5a8b2d94-dbf2-4ad7-9a56-4df61d9b37fe\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-dcbpc" Jan 24 11:59:59.595012 kubelet[2876]: I0124 11:59:59.594460 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8kjw\" (UniqueName: \"kubernetes.io/projected/5a8b2d94-dbf2-4ad7-9a56-4df61d9b37fe-kube-api-access-j8kjw\") pod \"tigera-operator-65cdcdfd6d-dcbpc\" (UID: \"5a8b2d94-dbf2-4ad7-9a56-4df61d9b37fe\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-dcbpc" Jan 24 11:59:59.596850 kubelet[2876]: E0124 11:59:59.595383 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 11:59:59.605510 containerd[1652]: time="2026-01-24T11:59:59.604342693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7ntqx,Uid:3f33a501-e5c3-4af5-91f1-4af6a1f65cf9,Namespace:kube-system,Attempt:0,}" Jan 24 11:59:59.756949 containerd[1652]: time="2026-01-24T11:59:59.756616075Z" level=info msg="connecting to shim 9d18bb7621061bc7ae0dee1ef40ff17eba911141a352a4c814a50334b77e64ec" address="unix:///run/containerd/s/ce249c07604f335c31304a26aa175fa24a7e6e6feb0a96c57c0b6dc22b5cf0c4" namespace=k8s.io protocol=ttrpc version=3 Jan 24 11:59:59.758210 containerd[1652]: time="2026-01-24T11:59:59.758169424Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-dcbpc,Uid:5a8b2d94-dbf2-4ad7-9a56-4df61d9b37fe,Namespace:tigera-operator,Attempt:0,}" Jan 24 11:59:59.973811 containerd[1652]: time="2026-01-24T11:59:59.973075482Z" level=info msg="connecting to shim 7f15c9a927714eb6bea2a8ed39926466edc036fd26285e03d78b6951cf8976c7" address="unix:///run/containerd/s/b5fec9d778ab8dee1e6550100d38019af9488dc3263c7edfd5f35002f7c11c90" namespace=k8s.io protocol=ttrpc version=3 Jan 24 11:59:59.999995 systemd[1]: Started cri-containerd-9d18bb7621061bc7ae0dee1ef40ff17eba911141a352a4c814a50334b77e64ec.scope - libcontainer container 9d18bb7621061bc7ae0dee1ef40ff17eba911141a352a4c814a50334b77e64ec. Jan 24 12:00:00.064000 audit: BPF prog-id=131 op=LOAD Jan 24 12:00:00.067510 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 24 12:00:00.067897 kernel: audit: type=1334 audit(1769256000.064:431): prog-id=131 op=LOAD Jan 24 12:00:00.073883 kernel: audit: type=1334 audit(1769256000.067:432): prog-id=132 op=LOAD Jan 24 12:00:00.073975 kernel: audit: type=1300 audit(1769256000.067:432): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.067000 audit: BPF prog-id=132 op=LOAD Jan 24 12:00:00.067000 audit[2957]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313862623736323130363162633761653064656531656634306666 Jan 24 12:00:00.100370 kernel: audit: type=1327 audit(1769256000.067:432): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313862623736323130363162633761653064656531656634306666 Jan 24 12:00:00.100466 kernel: audit: type=1334 audit(1769256000.068:433): prog-id=132 op=UNLOAD Jan 24 12:00:00.068000 audit: BPF prog-id=132 op=UNLOAD Jan 24 12:00:00.103766 kernel: audit: type=1300 audit(1769256000.068:433): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.068000 audit[2957]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.105112 systemd[1]: Started cri-containerd-7f15c9a927714eb6bea2a8ed39926466edc036fd26285e03d78b6951cf8976c7.scope - libcontainer container 7f15c9a927714eb6bea2a8ed39926466edc036fd26285e03d78b6951cf8976c7. 
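The audit PROCTITLE records in these entries carry the audited process's command line hex-encoded, with NUL bytes separating the arguments: the runs beginning 72756E63... decode to "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/...", and the later iptables/ip6tables entries decode the chain-creation commands issued by kube-proxy. A small illustrative decoder for such a field (a standalone helper, not part of any tool logged here):

package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

// decodeProctitle converts an audit PROCTITLE hex string into the argv it
// encodes; in the raw record the arguments are separated by NUL bytes.
func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: decode-proctitle <hex>")
		os.Exit(1)
	}
	args, err := decodeProctitle(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(strings.Join(args, " "))
}

Note that the kernel truncates long PROCTITLE fields, which is why some of the decoded iptables command lines below appear cut off mid-argument.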
Jan 24 12:00:00.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313862623736323130363162633761653064656531656634306666 Jan 24 12:00:00.140074 kernel: audit: type=1327 audit(1769256000.068:433): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313862623736323130363162633761653064656531656634306666 Jan 24 12:00:00.144257 kernel: audit: type=1334 audit(1769256000.068:434): prog-id=133 op=LOAD Jan 24 12:00:00.068000 audit: BPF prog-id=133 op=LOAD Jan 24 12:00:00.068000 audit[2957]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.158671 kernel: audit: type=1300 audit(1769256000.068:434): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.159209 kubelet[2876]: E0124 12:00:00.159133 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:00.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313862623736323130363162633761653064656531656634306666 Jan 24 12:00:00.175641 kernel: audit: type=1327 audit(1769256000.068:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313862623736323130363162633761653064656531656634306666 Jan 24 12:00:00.068000 audit: BPF prog-id=134 op=LOAD Jan 24 12:00:00.068000 audit[2957]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313862623736323130363162633761653064656531656634306666 Jan 24 12:00:00.068000 audit: BPF prog-id=134 op=UNLOAD Jan 24 12:00:00.068000 audit[2957]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.068000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313862623736323130363162633761653064656531656634306666 Jan 24 12:00:00.068000 audit: BPF prog-id=133 op=UNLOAD Jan 24 12:00:00.068000 audit[2957]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313862623736323130363162633761653064656531656634306666 Jan 24 12:00:00.069000 audit: BPF prog-id=135 op=LOAD Jan 24 12:00:00.069000 audit[2957]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313862623736323130363162633761653064656531656634306666 Jan 24 12:00:00.140000 audit: BPF prog-id=136 op=LOAD Jan 24 12:00:00.142000 audit: BPF prog-id=137 op=LOAD Jan 24 12:00:00.142000 audit[2988]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2967 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766313563396139323737313465623662656132613865643339393236 Jan 24 12:00:00.142000 audit: BPF prog-id=137 op=UNLOAD Jan 24 12:00:00.142000 audit[2988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766313563396139323737313465623662656132613865643339393236 Jan 24 12:00:00.142000 audit: BPF prog-id=138 op=LOAD Jan 24 12:00:00.142000 audit[2988]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2967 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.142000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766313563396139323737313465623662656132613865643339393236 Jan 24 12:00:00.143000 audit: BPF prog-id=139 op=LOAD Jan 24 12:00:00.143000 audit[2988]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2967 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766313563396139323737313465623662656132613865643339393236 Jan 24 12:00:00.143000 audit: BPF prog-id=139 op=UNLOAD Jan 24 12:00:00.143000 audit[2988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766313563396139323737313465623662656132613865643339393236 Jan 24 12:00:00.143000 audit: BPF prog-id=138 op=UNLOAD Jan 24 12:00:00.143000 audit[2988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766313563396139323737313465623662656132613865643339393236 Jan 24 12:00:00.143000 audit: BPF prog-id=140 op=LOAD Jan 24 12:00:00.143000 audit[2988]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2967 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766313563396139323737313465623662656132613865643339393236 Jan 24 12:00:00.271110 containerd[1652]: time="2026-01-24T12:00:00.267080746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7ntqx,Uid:3f33a501-e5c3-4af5-91f1-4af6a1f65cf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d18bb7621061bc7ae0dee1ef40ff17eba911141a352a4c814a50334b77e64ec\"" Jan 24 12:00:00.272511 kubelet[2876]: E0124 12:00:00.271682 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 
12:00:00.293499 containerd[1652]: time="2026-01-24T12:00:00.293256304Z" level=info msg="CreateContainer within sandbox \"9d18bb7621061bc7ae0dee1ef40ff17eba911141a352a4c814a50334b77e64ec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 12:00:00.326538 containerd[1652]: time="2026-01-24T12:00:00.326437561Z" level=info msg="Container 62edf2ba646a61806f4dd6ac2b604a889130b9cfa2d2df26fc2703491b1f7f75: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:00:00.329615 containerd[1652]: time="2026-01-24T12:00:00.329465838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-dcbpc,Uid:5a8b2d94-dbf2-4ad7-9a56-4df61d9b37fe,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7f15c9a927714eb6bea2a8ed39926466edc036fd26285e03d78b6951cf8976c7\"" Jan 24 12:00:00.333277 containerd[1652]: time="2026-01-24T12:00:00.333237969Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 12:00:00.342719 containerd[1652]: time="2026-01-24T12:00:00.342677848Z" level=info msg="CreateContainer within sandbox \"9d18bb7621061bc7ae0dee1ef40ff17eba911141a352a4c814a50334b77e64ec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"62edf2ba646a61806f4dd6ac2b604a889130b9cfa2d2df26fc2703491b1f7f75\"" Jan 24 12:00:00.343805 containerd[1652]: time="2026-01-24T12:00:00.343725262Z" level=info msg="StartContainer for \"62edf2ba646a61806f4dd6ac2b604a889130b9cfa2d2df26fc2703491b1f7f75\"" Jan 24 12:00:00.346776 containerd[1652]: time="2026-01-24T12:00:00.346693921Z" level=info msg="connecting to shim 62edf2ba646a61806f4dd6ac2b604a889130b9cfa2d2df26fc2703491b1f7f75" address="unix:///run/containerd/s/ce249c07604f335c31304a26aa175fa24a7e6e6feb0a96c57c0b6dc22b5cf0c4" protocol=ttrpc version=3 Jan 24 12:00:00.386906 systemd[1]: Started cri-containerd-62edf2ba646a61806f4dd6ac2b604a889130b9cfa2d2df26fc2703491b1f7f75.scope - libcontainer container 62edf2ba646a61806f4dd6ac2b604a889130b9cfa2d2df26fc2703491b1f7f75. 
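The containerd messages above trace the CRI sequence the kubelet drives for the kube-proxy pod: RunPodSandbox returns the sandbox id 9d18bb76..., CreateContainer is issued within that sandbox, and StartContainer runs the resulting container over the ttrpc shim. A rough sketch of that call sequence against the CRI runtime service, assuming the default containerd socket and an illustrative image tag (the kubelet's real code path is far more involved):

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI endpoint for this node.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: yields the sandbox id seen in the log.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-7ntqx",
				Namespace: "kube-system",
				Uid:       "3f33a501-e5c3-4af5-91f1-4af6a1f65cf9",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox; the image tag here is an
	// assumption for illustration, it does not appear in the log.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.34.1"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer on the returned container id.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox:", sb.PodSandboxId, "container:", c.ContainerId)
}
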
Jan 24 12:00:00.443392 kubelet[2876]: E0124 12:00:00.443161 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:00.503000 audit: BPF prog-id=141 op=LOAD Jan 24 12:00:00.503000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2946 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632656466326261363436613631383036663464643661633262363034 Jan 24 12:00:00.503000 audit: BPF prog-id=142 op=LOAD Jan 24 12:00:00.503000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2946 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632656466326261363436613631383036663464643661633262363034 Jan 24 12:00:00.505000 audit: BPF prog-id=142 op=UNLOAD Jan 24 12:00:00.505000 audit[3028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632656466326261363436613631383036663464643661633262363034 Jan 24 12:00:00.505000 audit: BPF prog-id=141 op=UNLOAD Jan 24 12:00:00.505000 audit[3028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632656466326261363436613631383036663464643661633262363034 Jan 24 12:00:00.505000 audit: BPF prog-id=143 op=LOAD Jan 24 12:00:00.505000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2946 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:00.505000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632656466326261363436613631383036663464643661633262363034 Jan 24 12:00:00.604446 containerd[1652]: time="2026-01-24T12:00:00.597322412Z" level=info msg="StartContainer for \"62edf2ba646a61806f4dd6ac2b604a889130b9cfa2d2df26fc2703491b1f7f75\" returns successfully" Jan 24 12:00:01.074000 audit[3092]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.074000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9aa8fac0 a2=0 a3=7ffc9aa8faac items=0 ppid=3041 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.074000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 12:00:01.078000 audit[3093]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.078000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd290d1910 a2=0 a3=7ffd290d18fc items=0 ppid=3041 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.078000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 12:00:01.078000 audit[3094]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.078000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe13438f10 a2=0 a3=7ffe13438efc items=0 ppid=3041 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 12:00:01.081000 audit[3096]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.081000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe28a773e0 a2=0 a3=7ffe28a773cc items=0 ppid=3041 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.081000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 12:00:01.089000 audit[3099]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.089000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce72a14e0 a2=0 a3=7ffce72a14cc items=0 ppid=3041 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.089000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 12:00:01.101000 audit[3100]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.101000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce47c1100 a2=0 a3=7ffce47c10ec items=0 ppid=3041 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.101000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 12:00:01.195000 audit[3101]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.195000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc8a4a9890 a2=0 a3=7ffc8a4a987c items=0 ppid=3041 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 12:00:01.211000 audit[3103]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.211000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc65e921a0 a2=0 a3=7ffc65e9218c items=0 ppid=3041 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.211000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 24 12:00:01.245000 audit[3106]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.245000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeb0adc3f0 a2=0 a3=7ffeb0adc3dc items=0 ppid=3041 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.245000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 24 12:00:01.259000 audit[3107]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.259000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff19986200 a2=0 a3=7fff199861ec items=0 ppid=3041 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 12:00:01.357000 audit[3109]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.357000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcc4d63ac0 a2=0 a3=7ffcc4d63aac items=0 ppid=3041 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.357000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 12:00:01.361000 audit[3110]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.361000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe59ffd40 a2=0 a3=7fffe59ffd2c items=0 ppid=3041 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.361000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 12:00:01.378000 audit[3112]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.378000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc2555a2e0 a2=0 a3=7ffc2555a2cc items=0 ppid=3041 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.378000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 12:00:01.416000 audit[3115]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.416000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd7e7fcea0 a2=0 a3=7ffd7e7fce8c items=0 ppid=3041 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 12:00:01.419000 audit[3116]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 
12:00:01.419000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf6610290 a2=0 a3=7ffdf661027c items=0 ppid=3041 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.419000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 12:00:01.430000 audit[3118]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.430000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff3d7506f0 a2=0 a3=7fff3d7506dc items=0 ppid=3041 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.430000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 12:00:01.436000 audit[3119]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.436000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5c8367e0 a2=0 a3=7ffe5c8367cc items=0 ppid=3041 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 12:00:01.443000 audit[3121]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.443000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe0a2b34f0 a2=0 a3=7ffe0a2b34dc items=0 ppid=3041 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.443000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 24 12:00:01.453032 kubelet[2876]: E0124 12:00:01.449800 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:01.459000 audit[3124]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.459000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd72de1730 a2=0 a3=7ffd72de171c items=0 ppid=3041 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.459000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 24 12:00:01.485000 audit[3127]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.485000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffa65a8370 a2=0 a3=7fffa65a835c items=0 ppid=3041 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.485000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 24 12:00:01.489000 audit[3128]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.489000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc692c8420 a2=0 a3=7ffc692c840c items=0 ppid=3041 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 12:00:01.493797 kubelet[2876]: I0124 12:00:01.493445 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7ntqx" podStartSLOduration=2.493364428 podStartE2EDuration="2.493364428s" podCreationTimestamp="2026-01-24 11:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 12:00:01.49176547 +0000 UTC m=+8.333977650" watchObservedRunningTime="2026-01-24 12:00:01.493364428 +0000 UTC m=+8.335576588" Jan 24 12:00:01.498000 audit[3130]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.498000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcbb4e1200 a2=0 a3=7ffcbb4e11ec items=0 ppid=3041 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.498000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 12:00:01.511000 audit[3133]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.511000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff24ff7890 a2=0 a3=7fff24ff787c items=0 ppid=3041 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 12:00:01.511000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 12:00:01.524000 audit[3134]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.524000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5536d480 a2=0 a3=7ffd5536d46c items=0 ppid=3041 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.524000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 12:00:01.532000 audit[3136]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 12:00:01.532000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc8eeca3c0 a2=0 a3=7ffc8eeca3ac items=0 ppid=3041 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.532000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 12:00:01.586000 audit[3146]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:01.586000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffea92f1250 a2=0 a3=7ffea92f123c items=0 ppid=3041 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.586000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:01.602000 audit[3146]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:01.602000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffea92f1250 a2=0 a3=7ffea92f123c items=0 ppid=3041 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.602000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:01.609000 audit[3151]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3151 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.609000 audit[3151]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff348650a0 a2=0 a3=7fff3486508c items=0 ppid=3041 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
12:00:01.609000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 12:00:01.613373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3165485318.mount: Deactivated successfully. Jan 24 12:00:01.625000 audit[3153]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.625000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc26659720 a2=0 a3=7ffc2665970c items=0 ppid=3041 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.625000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 24 12:00:01.636000 audit[3156]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.636000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd26e54e00 a2=0 a3=7ffd26e54dec items=0 ppid=3041 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.636000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 24 12:00:01.639000 audit[3157]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3157 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.639000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc451a8a70 a2=0 a3=7ffc451a8a5c items=0 ppid=3041 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.639000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 12:00:01.645000 audit[3159]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.645000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff80bb5d50 a2=0 a3=7fff80bb5d3c items=0 ppid=3041 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.645000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 12:00:01.647000 audit[3160]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3160 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 
12:00:01.647000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdda519de0 a2=0 a3=7ffdda519dcc items=0 ppid=3041 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.647000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 12:00:01.654000 audit[3162]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.654000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc261696b0 a2=0 a3=7ffc2616969c items=0 ppid=3041 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.654000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 12:00:01.663000 audit[3165]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3165 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.663000 audit[3165]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd5bab2910 a2=0 a3=7ffd5bab28fc items=0 ppid=3041 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.663000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 12:00:01.666000 audit[3166]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.666000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedb6c3190 a2=0 a3=7ffedb6c317c items=0 ppid=3041 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.666000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 12:00:01.673000 audit[3168]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.673000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc1e4c4780 a2=0 a3=7ffc1e4c476c items=0 ppid=3041 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.673000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 12:00:01.678000 audit[3169]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.678000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe67346890 a2=0 a3=7ffe6734687c items=0 ppid=3041 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.678000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 12:00:01.686000 audit[3171]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.686000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffb7fc6570 a2=0 a3=7fffb7fc655c items=0 ppid=3041 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.686000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 24 12:00:01.700000 audit[3174]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.700000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff873413c0 a2=0 a3=7fff873413ac items=0 ppid=3041 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.700000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 24 12:00:01.733000 audit[3177]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.733000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc660ae010 a2=0 a3=7ffc660adffc items=0 ppid=3041 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.733000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 24 12:00:01.736000 audit[3178]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.736000 
audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff0a5ccf60 a2=0 a3=7fff0a5ccf4c items=0 ppid=3041 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.736000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 12:00:01.742000 audit[3180]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.742000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc4de29350 a2=0 a3=7ffc4de2933c items=0 ppid=3041 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.742000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 12:00:01.753000 audit[3183]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.753000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc855fe990 a2=0 a3=7ffc855fe97c items=0 ppid=3041 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.753000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 12:00:01.756000 audit[3184]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.756000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb1ff1970 a2=0 a3=7fffb1ff195c items=0 ppid=3041 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.756000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 12:00:01.762000 audit[3186]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.762000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffea3cb8c10 a2=0 a3=7ffea3cb8bfc items=0 ppid=3041 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.762000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 12:00:01.765000 audit[3187]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3187 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.765000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5bb7cf90 a2=0 a3=7fff5bb7cf7c items=0 ppid=3041 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.765000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 12:00:01.778000 audit[3189]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.778000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd7b402ad0 a2=0 a3=7ffd7b402abc items=0 ppid=3041 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 12:00:01.792000 audit[3192]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 12:00:01.792000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe136dabb0 a2=0 a3=7ffe136dab9c items=0 ppid=3041 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.792000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 12:00:01.801000 audit[3194]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 12:00:01.801000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe8782dc30 a2=0 a3=7ffe8782dc1c items=0 ppid=3041 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.801000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:01.802000 audit[3194]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 12:00:01.802000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe8782dc30 a2=0 a3=7ffe8782dc1c items=0 ppid=3041 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:01.802000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:01.962161 kubelet[2876]: E0124 12:00:01.961972 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:02.498127 kubelet[2876]: E0124 12:00:02.497680 2876 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:02.521337 kubelet[2876]: E0124 12:00:02.498506 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:03.303835 kubelet[2876]: E0124 12:00:03.297460 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:03.506607 kubelet[2876]: E0124 12:00:03.505993 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:03.536730 kubelet[2876]: E0124 12:00:03.506100 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:06.038072 containerd[1652]: time="2026-01-24T12:00:06.037932204Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:00:06.043058 containerd[1652]: time="2026-01-24T12:00:06.042870278Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23560844" Jan 24 12:00:06.047870 containerd[1652]: time="2026-01-24T12:00:06.047768859Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:00:06.053724 containerd[1652]: time="2026-01-24T12:00:06.053634967Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:00:06.054672 containerd[1652]: time="2026-01-24T12:00:06.054520665Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 5.7209486s" Jan 24 12:00:06.054672 containerd[1652]: time="2026-01-24T12:00:06.054664973Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 12:00:06.075243 containerd[1652]: time="2026-01-24T12:00:06.075152664Z" level=info msg="CreateContainer within sandbox \"7f15c9a927714eb6bea2a8ed39926466edc036fd26285e03d78b6951cf8976c7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 12:00:06.128365 containerd[1652]: time="2026-01-24T12:00:06.128249915Z" level=info msg="Container 5cad373b527e56fba491fbe7ce0a46f0a83dc8d212c84b17cb4b9d7955ff9c54: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:00:06.172750 containerd[1652]: time="2026-01-24T12:00:06.163342979Z" level=info msg="CreateContainer within sandbox \"7f15c9a927714eb6bea2a8ed39926466edc036fd26285e03d78b6951cf8976c7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5cad373b527e56fba491fbe7ce0a46f0a83dc8d212c84b17cb4b9d7955ff9c54\"" Jan 24 12:00:06.172750 
containerd[1652]: time="2026-01-24T12:00:06.167461565Z" level=info msg="StartContainer for \"5cad373b527e56fba491fbe7ce0a46f0a83dc8d212c84b17cb4b9d7955ff9c54\"" Jan 24 12:00:06.172750 containerd[1652]: time="2026-01-24T12:00:06.169709540Z" level=info msg="connecting to shim 5cad373b527e56fba491fbe7ce0a46f0a83dc8d212c84b17cb4b9d7955ff9c54" address="unix:///run/containerd/s/b5fec9d778ab8dee1e6550100d38019af9488dc3263c7edfd5f35002f7c11c90" protocol=ttrpc version=3 Jan 24 12:00:06.249742 systemd[1]: Started cri-containerd-5cad373b527e56fba491fbe7ce0a46f0a83dc8d212c84b17cb4b9d7955ff9c54.scope - libcontainer container 5cad373b527e56fba491fbe7ce0a46f0a83dc8d212c84b17cb4b9d7955ff9c54. Jan 24 12:00:06.329000 audit: BPF prog-id=144 op=LOAD Jan 24 12:00:06.333892 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 24 12:00:06.334409 kernel: audit: type=1334 audit(1769256006.329:503): prog-id=144 op=LOAD Jan 24 12:00:06.334000 audit: BPF prog-id=145 op=LOAD Jan 24 12:00:06.339376 kernel: audit: type=1334 audit(1769256006.334:504): prog-id=145 op=LOAD Jan 24 12:00:06.334000 audit[3200]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2967 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:06.352065 kernel: audit: type=1300 audit(1769256006.334:504): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2967 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:06.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563616433373362353237653536666261343931666265376365306134 Jan 24 12:00:06.373435 kernel: audit: type=1327 audit(1769256006.334:504): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563616433373362353237653536666261343931666265376365306134 Jan 24 12:00:06.379682 kernel: audit: type=1334 audit(1769256006.334:505): prog-id=145 op=UNLOAD Jan 24 12:00:06.334000 audit: BPF prog-id=145 op=UNLOAD Jan 24 12:00:06.334000 audit[3200]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:06.401604 kernel: audit: type=1300 audit(1769256006.334:505): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:06.401681 kernel: audit: type=1327 audit(1769256006.334:505): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563616433373362353237653536666261343931666265376365306134 Jan 24 
12:00:06.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563616433373362353237653536666261343931666265376365306134 Jan 24 12:00:06.334000 audit: BPF prog-id=146 op=LOAD Jan 24 12:00:06.424205 kernel: audit: type=1334 audit(1769256006.334:506): prog-id=146 op=LOAD Jan 24 12:00:06.424275 kernel: audit: type=1300 audit(1769256006.334:506): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2967 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:06.334000 audit[3200]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2967 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:06.442849 kernel: audit: type=1327 audit(1769256006.334:506): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563616433373362353237653536666261343931666265376365306134 Jan 24 12:00:06.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563616433373362353237653536666261343931666265376365306134 Jan 24 12:00:06.334000 audit: BPF prog-id=147 op=LOAD Jan 24 12:00:06.334000 audit[3200]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2967 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:06.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563616433373362353237653536666261343931666265376365306134 Jan 24 12:00:06.334000 audit: BPF prog-id=147 op=UNLOAD Jan 24 12:00:06.334000 audit[3200]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:06.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563616433373362353237653536666261343931666265376365306134 Jan 24 12:00:06.334000 audit: BPF prog-id=146 op=UNLOAD Jan 24 12:00:06.334000 audit[3200]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2967 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:06.334000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563616433373362353237653536666261343931666265376365306134 Jan 24 12:00:06.334000 audit: BPF prog-id=148 op=LOAD Jan 24 12:00:06.334000 audit[3200]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2967 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:06.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563616433373362353237653536666261343931666265376365306134 Jan 24 12:00:06.490704 containerd[1652]: time="2026-01-24T12:00:06.490302164Z" level=info msg="StartContainer for \"5cad373b527e56fba491fbe7ce0a46f0a83dc8d212c84b17cb4b9d7955ff9c54\" returns successfully" Jan 24 12:00:06.760534 kubelet[2876]: I0124 12:00:06.759415 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-dcbpc" podStartSLOduration=2.03351661 podStartE2EDuration="7.759339717s" podCreationTimestamp="2026-01-24 11:59:59 +0000 UTC" firstStartedPulling="2026-01-24 12:00:00.332442674 +0000 UTC m=+7.174654844" lastFinishedPulling="2026-01-24 12:00:06.058265792 +0000 UTC m=+12.900477951" observedRunningTime="2026-01-24 12:00:06.751270546 +0000 UTC m=+13.593482716" watchObservedRunningTime="2026-01-24 12:00:06.759339717 +0000 UTC m=+13.601551897" Jan 24 12:00:16.236765 sudo[1847]: pam_unix(sudo:session): session closed for user root Jan 24 12:00:16.259785 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 24 12:00:16.259941 kernel: audit: type=1106 audit(1769256016.237:511): pid=1847 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 12:00:16.237000 audit[1847]: USER_END pid=1847 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 12:00:16.237000 audit[1847]: CRED_DISP pid=1847 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 12:00:16.273781 kernel: audit: type=1104 audit(1769256016.237:512): pid=1847 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 24 12:00:16.278639 sshd[1846]: Connection closed by 10.0.0.1 port 40352 Jan 24 12:00:16.279619 sshd-session[1842]: pam_unix(sshd:session): session closed for user core Jan 24 12:00:16.287000 audit[1842]: USER_END pid=1842 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:00:16.328610 kernel: audit: type=1106 audit(1769256016.287:513): pid=1842 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:00:16.325507 systemd[1]: sshd@6-10.0.0.100:22-10.0.0.1:40352.service: Deactivated successfully. Jan 24 12:00:16.333130 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 12:00:16.338224 systemd[1]: session-8.scope: Consumed 17.265s CPU time, 223.1M memory peak. Jan 24 12:00:16.344228 systemd-logind[1619]: Session 8 logged out. Waiting for processes to exit. Jan 24 12:00:16.288000 audit[1842]: CRED_DISP pid=1842 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:00:16.363146 systemd-logind[1619]: Removed session 8. Jan 24 12:00:16.366741 kernel: audit: type=1104 audit(1769256016.288:514): pid=1842 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:00:16.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.100:22-10.0.0.1:40352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:00:16.382669 kernel: audit: type=1131 audit(1769256016.325:515): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.100:22-10.0.0.1:40352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:00:17.155000 audit[3291]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:17.166857 kernel: audit: type=1325 audit(1769256017.155:516): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:17.155000 audit[3291]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcfc5ca770 a2=0 a3=7ffcfc5ca75c items=0 ppid=3041 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:17.190913 kernel: audit: type=1300 audit(1769256017.155:516): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcfc5ca770 a2=0 a3=7ffcfc5ca75c items=0 ppid=3041 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:17.203664 kernel: audit: type=1327 audit(1769256017.155:516): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:17.155000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:17.173000 audit[3291]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:17.173000 audit[3291]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcfc5ca770 a2=0 a3=0 items=0 ppid=3041 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:17.237894 kernel: audit: type=1325 audit(1769256017.173:517): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:17.238131 kernel: audit: type=1300 audit(1769256017.173:517): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcfc5ca770 a2=0 a3=0 items=0 ppid=3041 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:17.173000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:18.230000 audit[3293]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:18.230000 audit[3293]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe0f15b010 a2=0 a3=7ffe0f15affc items=0 ppid=3041 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:18.230000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:18.236000 audit[3293]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:18.236000 
audit[3293]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe0f15b010 a2=0 a3=0 items=0 ppid=3041 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:18.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:21.630261 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 24 12:00:21.630425 kernel: audit: type=1325 audit(1769256021.618:520): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:21.618000 audit[3295]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:21.618000 audit[3295]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff47105620 a2=0 a3=7fff4710560c items=0 ppid=3041 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:21.688244 kernel: audit: type=1300 audit(1769256021.618:520): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff47105620 a2=0 a3=7fff4710560c items=0 ppid=3041 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:21.733148 kernel: audit: type=1327 audit(1769256021.618:520): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:21.618000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:21.904000 audit[3295]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:21.913763 kernel: audit: type=1325 audit(1769256021.904:521): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:21.937619 kernel: audit: type=1300 audit(1769256021.904:521): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff47105620 a2=0 a3=0 items=0 ppid=3041 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:21.904000 audit[3295]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff47105620 a2=0 a3=0 items=0 ppid=3041 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:21.904000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:21.950655 kernel: audit: type=1327 audit(1769256021.904:521): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:22.953000 audit[3297]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 
24 12:00:22.988611 kernel: audit: type=1325 audit(1769256022.953:522): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:22.988767 kernel: audit: type=1300 audit(1769256022.953:522): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffedea58940 a2=0 a3=7ffedea5892c items=0 ppid=3041 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:22.953000 audit[3297]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffedea58940 a2=0 a3=7ffedea5892c items=0 ppid=3041 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:22.953000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:23.004617 kernel: audit: type=1327 audit(1769256022.953:522): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:22.990000 audit[3297]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:22.990000 audit[3297]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffedea58940 a2=0 a3=0 items=0 ppid=3041 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:22.990000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:23.041605 kernel: audit: type=1325 audit(1769256022.990:523): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:25.145000 audit[3299]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3299 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:25.145000 audit[3299]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffa377aad0 a2=0 a3=7fffa377aabc items=0 ppid=3041 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:25.145000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:25.152000 audit[3299]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3299 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:25.152000 audit[3299]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffa377aad0 a2=0 a3=0 items=0 ppid=3041 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:25.152000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:25.281166 systemd[1]: Created slice kubepods-besteffort-pod0e528f53_c3d2_47d5_a7f4_d5145b401833.slice - 
libcontainer container kubepods-besteffort-pod0e528f53_c3d2_47d5_a7f4_d5145b401833.slice. Jan 24 12:00:25.451420 kubelet[2876]: I0124 12:00:25.450477 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e528f53-c3d2-47d5-a7f4-d5145b401833-tigera-ca-bundle\") pod \"calico-typha-5678d64998-jsnfn\" (UID: \"0e528f53-c3d2-47d5-a7f4-d5145b401833\") " pod="calico-system/calico-typha-5678d64998-jsnfn" Jan 24 12:00:25.451420 kubelet[2876]: I0124 12:00:25.450658 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0e528f53-c3d2-47d5-a7f4-d5145b401833-typha-certs\") pod \"calico-typha-5678d64998-jsnfn\" (UID: \"0e528f53-c3d2-47d5-a7f4-d5145b401833\") " pod="calico-system/calico-typha-5678d64998-jsnfn" Jan 24 12:00:25.451420 kubelet[2876]: I0124 12:00:25.450695 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9vjz\" (UniqueName: \"kubernetes.io/projected/0e528f53-c3d2-47d5-a7f4-d5145b401833-kube-api-access-n9vjz\") pod \"calico-typha-5678d64998-jsnfn\" (UID: \"0e528f53-c3d2-47d5-a7f4-d5145b401833\") " pod="calico-system/calico-typha-5678d64998-jsnfn" Jan 24 12:00:25.481657 systemd[1]: Created slice kubepods-besteffort-pod0ec836bd_0809_470c_a3c7_0102f318e7c0.slice - libcontainer container kubepods-besteffort-pod0ec836bd_0809_470c_a3c7_0102f318e7c0.slice. Jan 24 12:00:25.551866 kubelet[2876]: I0124 12:00:25.551467 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0ec836bd-0809-470c-a3c7-0102f318e7c0-var-run-calico\") pod \"calico-node-vhvcr\" (UID: \"0ec836bd-0809-470c-a3c7-0102f318e7c0\") " pod="calico-system/calico-node-vhvcr" Jan 24 12:00:25.551866 kubelet[2876]: I0124 12:00:25.551738 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25tnw\" (UniqueName: \"kubernetes.io/projected/0ec836bd-0809-470c-a3c7-0102f318e7c0-kube-api-access-25tnw\") pod \"calico-node-vhvcr\" (UID: \"0ec836bd-0809-470c-a3c7-0102f318e7c0\") " pod="calico-system/calico-node-vhvcr" Jan 24 12:00:25.551866 kubelet[2876]: I0124 12:00:25.551794 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0ec836bd-0809-470c-a3c7-0102f318e7c0-flexvol-driver-host\") pod \"calico-node-vhvcr\" (UID: \"0ec836bd-0809-470c-a3c7-0102f318e7c0\") " pod="calico-system/calico-node-vhvcr" Jan 24 12:00:25.551866 kubelet[2876]: I0124 12:00:25.551826 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0ec836bd-0809-470c-a3c7-0102f318e7c0-var-lib-calico\") pod \"calico-node-vhvcr\" (UID: \"0ec836bd-0809-470c-a3c7-0102f318e7c0\") " pod="calico-system/calico-node-vhvcr" Jan 24 12:00:25.552126 kubelet[2876]: I0124 12:00:25.551886 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ec836bd-0809-470c-a3c7-0102f318e7c0-lib-modules\") pod \"calico-node-vhvcr\" (UID: \"0ec836bd-0809-470c-a3c7-0102f318e7c0\") " pod="calico-system/calico-node-vhvcr" Jan 24 12:00:25.552126 kubelet[2876]: I0124 12:00:25.551915 
2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0ec836bd-0809-470c-a3c7-0102f318e7c0-policysync\") pod \"calico-node-vhvcr\" (UID: \"0ec836bd-0809-470c-a3c7-0102f318e7c0\") " pod="calico-system/calico-node-vhvcr" Jan 24 12:00:25.552126 kubelet[2876]: I0124 12:00:25.551941 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0ec836bd-0809-470c-a3c7-0102f318e7c0-cni-bin-dir\") pod \"calico-node-vhvcr\" (UID: \"0ec836bd-0809-470c-a3c7-0102f318e7c0\") " pod="calico-system/calico-node-vhvcr" Jan 24 12:00:25.552126 kubelet[2876]: I0124 12:00:25.551971 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0ec836bd-0809-470c-a3c7-0102f318e7c0-cni-net-dir\") pod \"calico-node-vhvcr\" (UID: \"0ec836bd-0809-470c-a3c7-0102f318e7c0\") " pod="calico-system/calico-node-vhvcr" Jan 24 12:00:25.552126 kubelet[2876]: I0124 12:00:25.551994 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0ec836bd-0809-470c-a3c7-0102f318e7c0-node-certs\") pod \"calico-node-vhvcr\" (UID: \"0ec836bd-0809-470c-a3c7-0102f318e7c0\") " pod="calico-system/calico-node-vhvcr" Jan 24 12:00:25.552456 kubelet[2876]: I0124 12:00:25.552017 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ec836bd-0809-470c-a3c7-0102f318e7c0-xtables-lock\") pod \"calico-node-vhvcr\" (UID: \"0ec836bd-0809-470c-a3c7-0102f318e7c0\") " pod="calico-system/calico-node-vhvcr" Jan 24 12:00:25.552456 kubelet[2876]: I0124 12:00:25.552043 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0ec836bd-0809-470c-a3c7-0102f318e7c0-cni-log-dir\") pod \"calico-node-vhvcr\" (UID: \"0ec836bd-0809-470c-a3c7-0102f318e7c0\") " pod="calico-system/calico-node-vhvcr" Jan 24 12:00:25.552456 kubelet[2876]: I0124 12:00:25.552084 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ec836bd-0809-470c-a3c7-0102f318e7c0-tigera-ca-bundle\") pod \"calico-node-vhvcr\" (UID: \"0ec836bd-0809-470c-a3c7-0102f318e7c0\") " pod="calico-system/calico-node-vhvcr" Jan 24 12:00:25.691172 kubelet[2876]: E0124 12:00:25.690355 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:25.693380 containerd[1652]: time="2026-01-24T12:00:25.693271032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5678d64998-jsnfn,Uid:0e528f53-c3d2-47d5-a7f4-d5145b401833,Namespace:calico-system,Attempt:0,}" Jan 24 12:00:25.733300 kubelet[2876]: E0124 12:00:25.732878 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.733300 kubelet[2876]: W0124 12:00:25.733007 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 
12:00:25.733300 kubelet[2876]: E0124 12:00:25.733127 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.748042 kubelet[2876]: E0124 12:00:25.747697 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.748307 kubelet[2876]: W0124 12:00:25.748185 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.749478 kubelet[2876]: E0124 12:00:25.748509 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.788366 kubelet[2876]: E0124 12:00:25.787884 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:00:25.790689 kubelet[2876]: E0124 12:00:25.790623 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:25.793747 kubelet[2876]: E0124 12:00:25.793667 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.793747 kubelet[2876]: W0124 12:00:25.793693 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.793747 kubelet[2876]: E0124 12:00:25.793739 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.795322 kubelet[2876]: E0124 12:00:25.795218 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.795381 kubelet[2876]: W0124 12:00:25.795326 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.795381 kubelet[2876]: E0124 12:00:25.795349 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.798989 kubelet[2876]: E0124 12:00:25.798902 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.798989 kubelet[2876]: W0124 12:00:25.798929 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.798989 kubelet[2876]: E0124 12:00:25.798948 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:00:25.799323 containerd[1652]: time="2026-01-24T12:00:25.799289050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vhvcr,Uid:0ec836bd-0809-470c-a3c7-0102f318e7c0,Namespace:calico-system,Attempt:0,}" Jan 24 12:00:25.800878 kubelet[2876]: E0124 12:00:25.800850 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.801412 kubelet[2876]: W0124 12:00:25.800871 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.801465 kubelet[2876]: E0124 12:00:25.801420 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.802091 kubelet[2876]: E0124 12:00:25.802008 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.802091 kubelet[2876]: W0124 12:00:25.802044 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.802091 kubelet[2876]: E0124 12:00:25.802056 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.802693 kubelet[2876]: E0124 12:00:25.802675 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.802693 kubelet[2876]: W0124 12:00:25.802692 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.802773 kubelet[2876]: E0124 12:00:25.802709 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.803613 kubelet[2876]: E0124 12:00:25.803527 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.803613 kubelet[2876]: W0124 12:00:25.803609 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.803711 kubelet[2876]: E0124 12:00:25.803621 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:00:25.804678 kubelet[2876]: E0124 12:00:25.804637 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.804678 kubelet[2876]: W0124 12:00:25.804676 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.804678 kubelet[2876]: E0124 12:00:25.804693 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.805718 kubelet[2876]: E0124 12:00:25.805691 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.805718 kubelet[2876]: W0124 12:00:25.805713 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.805960 kubelet[2876]: E0124 12:00:25.805728 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.806656 kubelet[2876]: I0124 12:00:25.806608 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3477849f-ef62-42dc-be46-c8edc5b93ccb-registration-dir\") pod \"csi-node-driver-576w8\" (UID: \"3477849f-ef62-42dc-be46-c8edc5b93ccb\") " pod="calico-system/csi-node-driver-576w8" Jan 24 12:00:25.806983 kubelet[2876]: E0124 12:00:25.806958 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.806983 kubelet[2876]: W0124 12:00:25.806974 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.806983 kubelet[2876]: E0124 12:00:25.806986 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.812959 kubelet[2876]: E0124 12:00:25.808123 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.812959 kubelet[2876]: W0124 12:00:25.808140 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.812959 kubelet[2876]: E0124 12:00:25.808152 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:00:25.812959 kubelet[2876]: E0124 12:00:25.811288 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.812959 kubelet[2876]: W0124 12:00:25.811304 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.812959 kubelet[2876]: E0124 12:00:25.811317 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.816424 containerd[1652]: time="2026-01-24T12:00:25.815076513Z" level=info msg="connecting to shim ef212cfb2f43d6cc9282cc2004a9b6991155b45c69a2d8d706c269e46b04fad6" address="unix:///run/containerd/s/b0a1358c8525b6ec90b8671d4090efd64944609ec8e72a0b013a6222020c8f8d" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:00:25.817100 kubelet[2876]: E0124 12:00:25.817080 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.817984 kubelet[2876]: W0124 12:00:25.817472 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.817984 kubelet[2876]: E0124 12:00:25.817495 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.819898 kubelet[2876]: I0124 12:00:25.818197 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3477849f-ef62-42dc-be46-c8edc5b93ccb-kubelet-dir\") pod \"csi-node-driver-576w8\" (UID: \"3477849f-ef62-42dc-be46-c8edc5b93ccb\") " pod="calico-system/csi-node-driver-576w8" Jan 24 12:00:25.822974 kubelet[2876]: E0124 12:00:25.822526 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.822974 kubelet[2876]: W0124 12:00:25.822843 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.822974 kubelet[2876]: E0124 12:00:25.822857 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.823415 kubelet[2876]: E0124 12:00:25.823399 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.823503 kubelet[2876]: W0124 12:00:25.823485 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.825302 kubelet[2876]: E0124 12:00:25.824069 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:00:25.829294 kubelet[2876]: E0124 12:00:25.829120 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.829294 kubelet[2876]: W0124 12:00:25.829137 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.829294 kubelet[2876]: E0124 12:00:25.829151 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.835070 kubelet[2876]: E0124 12:00:25.834463 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.835070 kubelet[2876]: W0124 12:00:25.834481 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.835070 kubelet[2876]: E0124 12:00:25.834495 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.835967 kubelet[2876]: E0124 12:00:25.835320 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.835967 kubelet[2876]: W0124 12:00:25.835337 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.835967 kubelet[2876]: E0124 12:00:25.835352 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.838092 kubelet[2876]: E0124 12:00:25.838074 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.838175 kubelet[2876]: W0124 12:00:25.838158 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.838273 kubelet[2876]: E0124 12:00:25.838253 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.845836 kubelet[2876]: E0124 12:00:25.845815 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.845953 kubelet[2876]: W0124 12:00:25.845934 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.846073 kubelet[2876]: E0124 12:00:25.846051 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:00:25.856126 kubelet[2876]: E0124 12:00:25.851883 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.856126 kubelet[2876]: W0124 12:00:25.851902 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.856126 kubelet[2876]: E0124 12:00:25.851917 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.856335 kubelet[2876]: E0124 12:00:25.856320 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.861165 kubelet[2876]: W0124 12:00:25.861142 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.863934 kubelet[2876]: E0124 12:00:25.861261 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.871495 kubelet[2876]: E0124 12:00:25.871475 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.873261 kubelet[2876]: W0124 12:00:25.873237 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.873353 kubelet[2876]: E0124 12:00:25.873338 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.880897 kubelet[2876]: E0124 12:00:25.880724 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.881224 kubelet[2876]: W0124 12:00:25.881039 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.881458 kubelet[2876]: E0124 12:00:25.881386 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.882676 kubelet[2876]: E0124 12:00:25.882399 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.883072 kubelet[2876]: W0124 12:00:25.883050 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.883312 kubelet[2876]: E0124 12:00:25.883198 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:00:25.884679 kubelet[2876]: E0124 12:00:25.884662 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.884811 kubelet[2876]: W0124 12:00:25.884790 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.885004 kubelet[2876]: E0124 12:00:25.884982 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.943624 kubelet[2876]: E0124 12:00:25.941469 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.943624 kubelet[2876]: W0124 12:00:25.943611 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.943848 kubelet[2876]: E0124 12:00:25.943665 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.948034 containerd[1652]: time="2026-01-24T12:00:25.947858494Z" level=info msg="connecting to shim 159464e4aab2677242792e6aa7054bddd0c5ac366b3f20a84fb9f8fd5fcc4df2" address="unix:///run/containerd/s/2b563622b22c36b584dce485db145d220d76e670e91f6039b381126b7920b7cb" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:00:25.948379 kubelet[2876]: E0124 12:00:25.948292 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.948720 kubelet[2876]: W0124 12:00:25.948666 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.949437 kubelet[2876]: E0124 12:00:25.949303 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.956669 kubelet[2876]: E0124 12:00:25.954281 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.956669 kubelet[2876]: W0124 12:00:25.955798 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.957246 kubelet[2876]: E0124 12:00:25.957179 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:00:25.960284 kubelet[2876]: E0124 12:00:25.960205 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.960393 kubelet[2876]: W0124 12:00:25.960255 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.960393 kubelet[2876]: E0124 12:00:25.960371 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.961240 kubelet[2876]: E0124 12:00:25.961129 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.961240 kubelet[2876]: W0124 12:00:25.961176 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.961240 kubelet[2876]: E0124 12:00:25.961197 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.962385 kubelet[2876]: E0124 12:00:25.962314 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.962385 kubelet[2876]: W0124 12:00:25.962365 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.962385 kubelet[2876]: E0124 12:00:25.962387 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.962759 kubelet[2876]: I0124 12:00:25.962651 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3477849f-ef62-42dc-be46-c8edc5b93ccb-varrun\") pod \"csi-node-driver-576w8\" (UID: \"3477849f-ef62-42dc-be46-c8edc5b93ccb\") " pod="calico-system/csi-node-driver-576w8" Jan 24 12:00:25.963396 kubelet[2876]: E0124 12:00:25.963334 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.963396 kubelet[2876]: W0124 12:00:25.963382 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.963396 kubelet[2876]: E0124 12:00:25.963398 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:00:25.964244 kubelet[2876]: E0124 12:00:25.964200 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.964244 kubelet[2876]: W0124 12:00:25.964243 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.964341 kubelet[2876]: E0124 12:00:25.964261 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.966321 kubelet[2876]: E0124 12:00:25.965143 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.966321 kubelet[2876]: W0124 12:00:25.965167 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.966321 kubelet[2876]: E0124 12:00:25.965182 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.966321 kubelet[2876]: I0124 12:00:25.965653 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6frb\" (UniqueName: \"kubernetes.io/projected/3477849f-ef62-42dc-be46-c8edc5b93ccb-kube-api-access-g6frb\") pod \"csi-node-driver-576w8\" (UID: \"3477849f-ef62-42dc-be46-c8edc5b93ccb\") " pod="calico-system/csi-node-driver-576w8" Jan 24 12:00:25.966321 kubelet[2876]: E0124 12:00:25.966139 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.966321 kubelet[2876]: W0124 12:00:25.966152 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.966321 kubelet[2876]: E0124 12:00:25.966169 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.967149 kubelet[2876]: E0124 12:00:25.967079 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.967149 kubelet[2876]: W0124 12:00:25.967124 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.967149 kubelet[2876]: E0124 12:00:25.967138 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:00:25.970415 kubelet[2876]: E0124 12:00:25.968674 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.970415 kubelet[2876]: W0124 12:00:25.968728 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.970415 kubelet[2876]: E0124 12:00:25.968748 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.971002 kubelet[2876]: E0124 12:00:25.970933 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.971002 kubelet[2876]: W0124 12:00:25.970981 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.971002 kubelet[2876]: E0124 12:00:25.971000 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.971206 kubelet[2876]: I0124 12:00:25.971124 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3477849f-ef62-42dc-be46-c8edc5b93ccb-socket-dir\") pod \"csi-node-driver-576w8\" (UID: \"3477849f-ef62-42dc-be46-c8edc5b93ccb\") " pod="calico-system/csi-node-driver-576w8" Jan 24 12:00:25.972968 kubelet[2876]: E0124 12:00:25.972180 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.972968 kubelet[2876]: W0124 12:00:25.972226 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.972968 kubelet[2876]: E0124 12:00:25.972247 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.974883 kubelet[2876]: E0124 12:00:25.974633 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.974883 kubelet[2876]: W0124 12:00:25.974679 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.974883 kubelet[2876]: E0124 12:00:25.974697 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:00:25.977179 kubelet[2876]: E0124 12:00:25.976886 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.977179 kubelet[2876]: W0124 12:00:25.977017 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.977179 kubelet[2876]: E0124 12:00:25.977110 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.980026 kubelet[2876]: E0124 12:00:25.979664 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.980432 kubelet[2876]: W0124 12:00:25.979965 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.980432 kubelet[2876]: E0124 12:00:25.980068 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.982811 kubelet[2876]: E0124 12:00:25.982712 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.984717 kubelet[2876]: W0124 12:00:25.983269 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.984717 kubelet[2876]: E0124 12:00:25.983330 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:25.987643 kubelet[2876]: E0124 12:00:25.985941 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:25.987643 kubelet[2876]: W0124 12:00:25.985960 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:25.987643 kubelet[2876]: E0124 12:00:25.985982 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:26.000818 systemd[1]: Started cri-containerd-ef212cfb2f43d6cc9282cc2004a9b6991155b45c69a2d8d706c269e46b04fad6.scope - libcontainer container ef212cfb2f43d6cc9282cc2004a9b6991155b45c69a2d8d706c269e46b04fad6. Jan 24 12:00:26.024075 systemd[1]: Started cri-containerd-159464e4aab2677242792e6aa7054bddd0c5ac366b3f20a84fb9f8fd5fcc4df2.scope - libcontainer container 159464e4aab2677242792e6aa7054bddd0c5ac366b3f20a84fb9f8fd5fcc4df2. 
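The repeated driver-call.go:262 / driver-call.go:149 / plugins.go:697 errors above come from kubelet probing the FlexVolume plugin directory: it executes the driver binary with the `init` argument and tries to unmarshal the JSON the driver prints. A minimal Go sketch of that pattern (illustrative only, not kubelet's actual implementation) shows why a missing binary yields exactly "unexpected end of JSON input" on the empty output; the binary path is taken from the log, everything else is assumed for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the minimal shape of a FlexVolume "init" reply,
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// probeDriver runs "<driver> init" and decodes its stdout, roughly the
// sequence behind the W "driver call failed" and E "unexpected end of
// JSON input" lines above.
func probeDriver(path string) (*DriverStatus, error) {
	out, err := exec.Command(path, "init").Output()
	if err != nil {
		// The exec itself fails while the Calico flexvol-driver init
		// container has not yet copied the binary into place.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st DriverStatus
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		// With empty output this is exactly "unexpected end of JSON input".
		return nil, uerr
	}
	return &st, nil
}

func main() {
	// Path copied from the log; once the driver exists and prints valid
	// JSON, the probe succeeds and the repeated errors stop.
	if _, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println("probe error:", err)
	}
}
```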
Jan 24 12:00:26.045000 audit: BPF prog-id=149 op=LOAD Jan 24 12:00:26.047000 audit: BPF prog-id=150 op=LOAD Jan 24 12:00:26.047000 audit[3361]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3323 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566323132636662326634336436636339323832636332303034613962 Jan 24 12:00:26.047000 audit: BPF prog-id=150 op=UNLOAD Jan 24 12:00:26.047000 audit[3361]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3323 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566323132636662326634336436636339323832636332303034613962 Jan 24 12:00:26.048000 audit: BPF prog-id=151 op=LOAD Jan 24 12:00:26.048000 audit[3361]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3323 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566323132636662326634336436636339323832636332303034613962 Jan 24 12:00:26.049000 audit: BPF prog-id=152 op=LOAD Jan 24 12:00:26.049000 audit[3361]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3323 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566323132636662326634336436636339323832636332303034613962 Jan 24 12:00:26.049000 audit: BPF prog-id=152 op=UNLOAD Jan 24 12:00:26.049000 audit[3361]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3323 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566323132636662326634336436636339323832636332303034613962 Jan 24 12:00:26.049000 audit: BPF prog-id=151 op=UNLOAD Jan 24 12:00:26.049000 audit[3361]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3323 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566323132636662326634336436636339323832636332303034613962 Jan 24 12:00:26.049000 audit: BPF prog-id=153 op=LOAD Jan 24 12:00:26.049000 audit[3361]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3323 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566323132636662326634336436636339323832636332303034613962 Jan 24 12:00:26.063000 audit: BPF prog-id=154 op=LOAD Jan 24 12:00:26.064000 audit: BPF prog-id=155 op=LOAD Jan 24 12:00:26.064000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206238 a2=98 a3=0 items=0 ppid=3376 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393436346534616162323637373234323739326536616137303534 Jan 24 12:00:26.064000 audit: BPF prog-id=155 op=UNLOAD Jan 24 12:00:26.064000 audit[3399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393436346534616162323637373234323739326536616137303534 Jan 24 12:00:26.064000 audit: BPF prog-id=156 op=LOAD Jan 24 12:00:26.064000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206488 a2=98 a3=0 items=0 ppid=3376 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393436346534616162323637373234323739326536616137303534 Jan 24 12:00:26.064000 audit: BPF prog-id=157 op=LOAD Jan 24 12:00:26.064000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000206218 a2=98 a3=0 items=0 ppid=3376 pid=3399 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393436346534616162323637373234323739326536616137303534 Jan 24 12:00:26.065000 audit: BPF prog-id=157 op=UNLOAD Jan 24 12:00:26.065000 audit[3399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393436346534616162323637373234323739326536616137303534 Jan 24 12:00:26.065000 audit: BPF prog-id=156 op=UNLOAD Jan 24 12:00:26.065000 audit[3399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393436346534616162323637373234323739326536616137303534 Jan 24 12:00:26.065000 audit: BPF prog-id=158 op=LOAD Jan 24 12:00:26.065000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002066e8 a2=98 a3=0 items=0 ppid=3376 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393436346534616162323637373234323739326536616137303534 Jan 24 12:00:26.080283 kubelet[2876]: E0124 12:00:26.079945 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.080448 kubelet[2876]: W0124 12:00:26.080248 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.080448 kubelet[2876]: E0124 12:00:26.080375 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:00:26.086723 kubelet[2876]: E0124 12:00:26.084954 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.086723 kubelet[2876]: W0124 12:00:26.084975 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.086723 kubelet[2876]: E0124 12:00:26.084999 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:26.086723 kubelet[2876]: E0124 12:00:26.086071 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.086723 kubelet[2876]: W0124 12:00:26.086087 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.086723 kubelet[2876]: E0124 12:00:26.086105 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:26.090833 kubelet[2876]: E0124 12:00:26.090796 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.090833 kubelet[2876]: W0124 12:00:26.090817 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.090946 kubelet[2876]: E0124 12:00:26.090840 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:26.095659 kubelet[2876]: E0124 12:00:26.094690 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.095659 kubelet[2876]: W0124 12:00:26.094730 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.095659 kubelet[2876]: E0124 12:00:26.094752 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:26.095659 kubelet[2876]: E0124 12:00:26.095203 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.095659 kubelet[2876]: W0124 12:00:26.095217 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.095659 kubelet[2876]: E0124 12:00:26.095232 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:00:26.095903 kubelet[2876]: E0124 12:00:26.095851 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.095903 kubelet[2876]: W0124 12:00:26.095864 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.095903 kubelet[2876]: E0124 12:00:26.095880 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:26.096726 kubelet[2876]: E0124 12:00:26.096659 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.096792 kubelet[2876]: W0124 12:00:26.096759 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.096792 kubelet[2876]: E0124 12:00:26.096779 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:26.098977 kubelet[2876]: E0124 12:00:26.098907 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.098977 kubelet[2876]: W0124 12:00:26.098954 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.098977 kubelet[2876]: E0124 12:00:26.098975 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:26.101904 kubelet[2876]: E0124 12:00:26.100639 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.101904 kubelet[2876]: W0124 12:00:26.100657 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.101904 kubelet[2876]: E0124 12:00:26.100672 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:26.108464 kubelet[2876]: E0124 12:00:26.106706 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.108464 kubelet[2876]: W0124 12:00:26.106736 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.108464 kubelet[2876]: E0124 12:00:26.106757 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:00:26.115732 kubelet[2876]: E0124 12:00:26.109760 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.115732 kubelet[2876]: W0124 12:00:26.109874 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.115732 kubelet[2876]: E0124 12:00:26.109893 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:26.115732 kubelet[2876]: E0124 12:00:26.112654 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.115732 kubelet[2876]: W0124 12:00:26.112669 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.115732 kubelet[2876]: E0124 12:00:26.112685 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:26.117754 kubelet[2876]: E0124 12:00:26.117291 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.117754 kubelet[2876]: W0124 12:00:26.117414 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.117754 kubelet[2876]: E0124 12:00:26.117434 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:26.120461 kubelet[2876]: E0124 12:00:26.120089 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.120461 kubelet[2876]: W0124 12:00:26.120109 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.120461 kubelet[2876]: E0124 12:00:26.120134 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 12:00:26.143998 kubelet[2876]: E0124 12:00:26.143964 2876 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 12:00:26.144239 kubelet[2876]: W0124 12:00:26.144218 2876 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 12:00:26.144329 kubelet[2876]: E0124 12:00:26.144312 2876 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 12:00:26.144982 containerd[1652]: time="2026-01-24T12:00:26.144861797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vhvcr,Uid:0ec836bd-0809-470c-a3c7-0102f318e7c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"159464e4aab2677242792e6aa7054bddd0c5ac366b3f20a84fb9f8fd5fcc4df2\"" Jan 24 12:00:26.149721 kubelet[2876]: E0124 12:00:26.149690 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:26.155654 containerd[1652]: time="2026-01-24T12:00:26.154912368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 12:00:26.181974 containerd[1652]: time="2026-01-24T12:00:26.181884017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5678d64998-jsnfn,Uid:0e528f53-c3d2-47d5-a7f4-d5145b401833,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef212cfb2f43d6cc9282cc2004a9b6991155b45c69a2d8d706c269e46b04fad6\"" Jan 24 12:00:26.189083 kubelet[2876]: E0124 12:00:26.187811 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:26.192000 audit[3470]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:26.192000 audit[3470]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdacef0940 a2=0 a3=7ffdacef092c items=0 ppid=3041 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:26.200000 audit[3470]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:26.200000 audit[3470]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdacef0940 a2=0 a3=0 items=0 ppid=3041 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:26.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:26.892894 kubelet[2876]: E0124 12:00:26.891750 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:00:27.000883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312512535.mount: Deactivated successfully. 
Jan 24 12:00:27.356349 containerd[1652]: time="2026-01-24T12:00:27.356089151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:00:27.358237 containerd[1652]: time="2026-01-24T12:00:27.358134185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 24 12:00:27.360163 containerd[1652]: time="2026-01-24T12:00:27.360086265Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:00:27.363254 containerd[1652]: time="2026-01-24T12:00:27.362912762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:00:27.364795 containerd[1652]: time="2026-01-24T12:00:27.364694149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.208401861s" Jan 24 12:00:27.364912 containerd[1652]: time="2026-01-24T12:00:27.364882072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 12:00:27.368336 containerd[1652]: time="2026-01-24T12:00:27.367964680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 12:00:27.381400 containerd[1652]: time="2026-01-24T12:00:27.381195984Z" level=info msg="CreateContainer within sandbox \"159464e4aab2677242792e6aa7054bddd0c5ac366b3f20a84fb9f8fd5fcc4df2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 12:00:27.431649 containerd[1652]: time="2026-01-24T12:00:27.430744637Z" level=info msg="Container b1ec6765605cc09e920a3a4cfa57e11318a869f4ce2106065bf14b41d2ce631d: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:00:27.435126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1443077992.mount: Deactivated successfully. Jan 24 12:00:27.450444 containerd[1652]: time="2026-01-24T12:00:27.450357761Z" level=info msg="CreateContainer within sandbox \"159464e4aab2677242792e6aa7054bddd0c5ac366b3f20a84fb9f8fd5fcc4df2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b1ec6765605cc09e920a3a4cfa57e11318a869f4ce2106065bf14b41d2ce631d\"" Jan 24 12:00:27.451435 containerd[1652]: time="2026-01-24T12:00:27.451364383Z" level=info msg="StartContainer for \"b1ec6765605cc09e920a3a4cfa57e11318a869f4ce2106065bf14b41d2ce631d\"" Jan 24 12:00:27.454200 containerd[1652]: time="2026-01-24T12:00:27.454161696Z" level=info msg="connecting to shim b1ec6765605cc09e920a3a4cfa57e11318a869f4ce2106065bf14b41d2ce631d" address="unix:///run/containerd/s/2b563622b22c36b584dce485db145d220d76e670e91f6039b381126b7920b7cb" protocol=ttrpc version=3 Jan 24 12:00:27.531028 systemd[1]: Started cri-containerd-b1ec6765605cc09e920a3a4cfa57e11318a869f4ce2106065bf14b41d2ce631d.scope - libcontainer container b1ec6765605cc09e920a3a4cfa57e11318a869f4ce2106065bf14b41d2ce631d. 
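(Aside, not part of the captured log.) The audit records above and below carry the invoking command line as a hex-encoded PROCTITLE field, with NUL bytes separating the argv elements. A small decoding sketch; the hex value is copied verbatim from the NETFILTER_CFG/iptables-restore audit records earlier in this log:

    # Decode a kernel-audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
    def decode_proctitle(hex_string: str) -> str:
        return bytes.fromhex(hex_string).replace(b"\x00", b" ").decode()

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # Prints: iptables-restore -w 5 --noflush --counters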
Jan 24 12:00:27.683863 kernel: kauditd_printk_skb: 58 callbacks suppressed Jan 24 12:00:27.684696 kernel: audit: type=1334 audit(1769256027.674:544): prog-id=159 op=LOAD Jan 24 12:00:27.674000 audit: BPF prog-id=159 op=LOAD Jan 24 12:00:27.674000 audit[3479]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3376 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:27.699200 kernel: audit: type=1300 audit(1769256027.674:544): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3376 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:27.699296 kernel: audit: type=1327 audit(1769256027.674:544): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231656336373635363035636330396539323061336134636661353765 Jan 24 12:00:27.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231656336373635363035636330396539323061336134636661353765 Jan 24 12:00:27.720072 kernel: audit: type=1334 audit(1769256027.674:545): prog-id=160 op=LOAD Jan 24 12:00:27.674000 audit: BPF prog-id=160 op=LOAD Jan 24 12:00:27.674000 audit[3479]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3376 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:27.735530 kernel: audit: type=1300 audit(1769256027.674:545): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3376 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:27.736211 kernel: audit: type=1327 audit(1769256027.674:545): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231656336373635363035636330396539323061336134636661353765 Jan 24 12:00:27.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231656336373635363035636330396539323061336134636661353765 Jan 24 12:00:27.674000 audit: BPF prog-id=160 op=UNLOAD Jan 24 12:00:27.751511 kernel: audit: type=1334 audit(1769256027.674:546): prog-id=160 op=UNLOAD Jan 24 12:00:27.674000 audit[3479]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:27.764292 kernel: audit: type=1300 
audit(1769256027.674:546): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:27.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231656336373635363035636330396539323061336134636661353765 Jan 24 12:00:27.774735 containerd[1652]: time="2026-01-24T12:00:27.774527681Z" level=info msg="StartContainer for \"b1ec6765605cc09e920a3a4cfa57e11318a869f4ce2106065bf14b41d2ce631d\" returns successfully" Jan 24 12:00:27.778679 kernel: audit: type=1327 audit(1769256027.674:546): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231656336373635363035636330396539323061336134636661353765 Jan 24 12:00:27.778783 kernel: audit: type=1334 audit(1769256027.674:547): prog-id=159 op=UNLOAD Jan 24 12:00:27.674000 audit: BPF prog-id=159 op=UNLOAD Jan 24 12:00:27.674000 audit[3479]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:27.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231656336373635363035636330396539323061336134636661353765 Jan 24 12:00:27.674000 audit: BPF prog-id=161 op=LOAD Jan 24 12:00:27.674000 audit[3479]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3376 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:27.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231656336373635363035636330396539323061336134636661353765 Jan 24 12:00:27.806432 systemd[1]: cri-containerd-b1ec6765605cc09e920a3a4cfa57e11318a869f4ce2106065bf14b41d2ce631d.scope: Deactivated successfully. Jan 24 12:00:27.820000 audit: BPF prog-id=161 op=UNLOAD Jan 24 12:00:27.827987 containerd[1652]: time="2026-01-24T12:00:27.827911023Z" level=info msg="received container exit event container_id:\"b1ec6765605cc09e920a3a4cfa57e11318a869f4ce2106065bf14b41d2ce631d\" id:\"b1ec6765605cc09e920a3a4cfa57e11318a869f4ce2106065bf14b41d2ce631d\" pid:3493 exited_at:{seconds:1769256027 nanos:827220916}" Jan 24 12:00:27.889261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1ec6765605cc09e920a3a4cfa57e11318a869f4ce2106065bf14b41d2ce631d-rootfs.mount: Deactivated successfully. 
Jan 24 12:00:27.893420 kubelet[2876]: E0124 12:00:27.893337 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:28.888661 kubelet[2876]: E0124 12:00:28.888488 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:00:28.901506 kubelet[2876]: E0124 12:00:28.901450 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:30.129138 containerd[1652]: time="2026-01-24T12:00:30.129002012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:00:30.132133 containerd[1652]: time="2026-01-24T12:00:30.130730110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 24 12:00:30.136700 containerd[1652]: time="2026-01-24T12:00:30.132945602Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:00:30.138669 containerd[1652]: time="2026-01-24T12:00:30.138479455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:00:30.140242 containerd[1652]: time="2026-01-24T12:00:30.140087270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.772049202s" Jan 24 12:00:30.140242 containerd[1652]: time="2026-01-24T12:00:30.140163914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 12:00:30.152021 containerd[1652]: time="2026-01-24T12:00:30.150036621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 12:00:30.191685 containerd[1652]: time="2026-01-24T12:00:30.191507109Z" level=info msg="CreateContainer within sandbox \"ef212cfb2f43d6cc9282cc2004a9b6991155b45c69a2d8d706c269e46b04fad6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 12:00:30.255517 containerd[1652]: time="2026-01-24T12:00:30.255197019Z" level=info msg="Container 9f5462e3ce1372846058502bcf38837038c55538bfd3f1cfb02d20f1a2c78082: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:00:30.289896 containerd[1652]: time="2026-01-24T12:00:30.289188826Z" level=info msg="CreateContainer within sandbox \"ef212cfb2f43d6cc9282cc2004a9b6991155b45c69a2d8d706c269e46b04fad6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9f5462e3ce1372846058502bcf38837038c55538bfd3f1cfb02d20f1a2c78082\"" Jan 24 12:00:30.290648 containerd[1652]: time="2026-01-24T12:00:30.290344781Z" level=info 
msg="StartContainer for \"9f5462e3ce1372846058502bcf38837038c55538bfd3f1cfb02d20f1a2c78082\"" Jan 24 12:00:30.293759 containerd[1652]: time="2026-01-24T12:00:30.292489733Z" level=info msg="connecting to shim 9f5462e3ce1372846058502bcf38837038c55538bfd3f1cfb02d20f1a2c78082" address="unix:///run/containerd/s/b0a1358c8525b6ec90b8671d4090efd64944609ec8e72a0b013a6222020c8f8d" protocol=ttrpc version=3 Jan 24 12:00:30.374784 systemd[1]: Started cri-containerd-9f5462e3ce1372846058502bcf38837038c55538bfd3f1cfb02d20f1a2c78082.scope - libcontainer container 9f5462e3ce1372846058502bcf38837038c55538bfd3f1cfb02d20f1a2c78082. Jan 24 12:00:30.439000 audit: BPF prog-id=162 op=LOAD Jan 24 12:00:30.440000 audit: BPF prog-id=163 op=LOAD Jan 24 12:00:30.440000 audit[3538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3323 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:30.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353436326533636531333732383436303538353032626366333838 Jan 24 12:00:30.441000 audit: BPF prog-id=163 op=UNLOAD Jan 24 12:00:30.441000 audit[3538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3323 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:30.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353436326533636531333732383436303538353032626366333838 Jan 24 12:00:30.441000 audit: BPF prog-id=164 op=LOAD Jan 24 12:00:30.441000 audit[3538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3323 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:30.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353436326533636531333732383436303538353032626366333838 Jan 24 12:00:30.441000 audit: BPF prog-id=165 op=LOAD Jan 24 12:00:30.441000 audit[3538]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3323 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:30.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353436326533636531333732383436303538353032626366333838 Jan 24 12:00:30.441000 audit: BPF prog-id=165 op=UNLOAD Jan 24 12:00:30.441000 audit[3538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 
a2=0 a3=0 items=0 ppid=3323 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:30.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353436326533636531333732383436303538353032626366333838 Jan 24 12:00:30.441000 audit: BPF prog-id=164 op=UNLOAD Jan 24 12:00:30.441000 audit[3538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3323 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:30.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353436326533636531333732383436303538353032626366333838 Jan 24 12:00:30.441000 audit: BPF prog-id=166 op=LOAD Jan 24 12:00:30.441000 audit[3538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3323 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:30.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353436326533636531333732383436303538353032626366333838 Jan 24 12:00:30.583749 containerd[1652]: time="2026-01-24T12:00:30.581304314Z" level=info msg="StartContainer for \"9f5462e3ce1372846058502bcf38837038c55538bfd3f1cfb02d20f1a2c78082\" returns successfully" Jan 24 12:00:30.963661 kubelet[2876]: E0124 12:00:30.958232 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:00:30.976019 kubelet[2876]: E0124 12:00:30.975916 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:31.068849 kubelet[2876]: I0124 12:00:31.068298 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5678d64998-jsnfn" podStartSLOduration=2.1099881209999998 podStartE2EDuration="6.067908591s" podCreationTimestamp="2026-01-24 12:00:25 +0000 UTC" firstStartedPulling="2026-01-24 12:00:26.191206562 +0000 UTC m=+33.033418721" lastFinishedPulling="2026-01-24 12:00:30.149127031 +0000 UTC m=+36.991339191" observedRunningTime="2026-01-24 12:00:31.067796875 +0000 UTC m=+37.910009036" watchObservedRunningTime="2026-01-24 12:00:31.067908591 +0000 UTC m=+37.910120752" Jan 24 12:00:31.986112 kubelet[2876]: E0124 12:00:31.985826 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 24 12:00:32.099000 audit[3580]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:32.099000 audit[3580]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcd08a1500 a2=0 a3=7ffcd08a14ec items=0 ppid=3041 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:32.099000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:32.129000 audit[3580]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:00:32.129000 audit[3580]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcd08a1500 a2=0 a3=7ffcd08a14ec items=0 ppid=3041 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:32.129000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:00:32.889436 kubelet[2876]: E0124 12:00:32.888848 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:00:32.989715 kubelet[2876]: E0124 12:00:32.988830 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:34.888796 kubelet[2876]: E0124 12:00:34.888304 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:00:36.889153 kubelet[2876]: E0124 12:00:36.887718 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:00:37.379354 containerd[1652]: time="2026-01-24T12:00:37.378671526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:00:37.384278 containerd[1652]: time="2026-01-24T12:00:37.384026879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 24 12:00:37.388694 containerd[1652]: time="2026-01-24T12:00:37.388177624Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:00:37.398443 containerd[1652]: time="2026-01-24T12:00:37.398345206Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:00:37.400586 containerd[1652]: time="2026-01-24T12:00:37.400243239Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 7.250145794s" Jan 24 12:00:37.400586 containerd[1652]: time="2026-01-24T12:00:37.400308803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 12:00:37.426798 containerd[1652]: time="2026-01-24T12:00:37.426648652Z" level=info msg="CreateContainer within sandbox \"159464e4aab2677242792e6aa7054bddd0c5ac366b3f20a84fb9f8fd5fcc4df2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 12:00:37.465697 containerd[1652]: time="2026-01-24T12:00:37.465615792Z" level=info msg="Container aa6b8d7f5b02c8374fc3f772047951aa5a06850672a4d33cb8bed5a11cea4320: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:00:37.504716 containerd[1652]: time="2026-01-24T12:00:37.504406711Z" level=info msg="CreateContainer within sandbox \"159464e4aab2677242792e6aa7054bddd0c5ac366b3f20a84fb9f8fd5fcc4df2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"aa6b8d7f5b02c8374fc3f772047951aa5a06850672a4d33cb8bed5a11cea4320\"" Jan 24 12:00:37.505944 containerd[1652]: time="2026-01-24T12:00:37.505912879Z" level=info msg="StartContainer for \"aa6b8d7f5b02c8374fc3f772047951aa5a06850672a4d33cb8bed5a11cea4320\"" Jan 24 12:00:37.532299 containerd[1652]: time="2026-01-24T12:00:37.524810820Z" level=info msg="connecting to shim aa6b8d7f5b02c8374fc3f772047951aa5a06850672a4d33cb8bed5a11cea4320" address="unix:///run/containerd/s/2b563622b22c36b584dce485db145d220d76e670e91f6039b381126b7920b7cb" protocol=ttrpc version=3 Jan 24 12:00:37.601340 systemd[1]: Started cri-containerd-aa6b8d7f5b02c8374fc3f772047951aa5a06850672a4d33cb8bed5a11cea4320.scope - libcontainer container aa6b8d7f5b02c8374fc3f772047951aa5a06850672a4d33cb8bed5a11cea4320. 
Jan 24 12:00:37.762000 audit: BPF prog-id=167 op=LOAD Jan 24 12:00:37.771019 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 24 12:00:37.771148 kernel: audit: type=1334 audit(1769256037.762:560): prog-id=167 op=LOAD Jan 24 12:00:37.773683 kernel: audit: type=1300 audit(1769256037.762:560): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3376 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:37.762000 audit[3589]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3376 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:37.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161366238643766356230326338333734666333663737323034373935 Jan 24 12:00:37.817394 kernel: audit: type=1327 audit(1769256037.762:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161366238643766356230326338333734666333663737323034373935 Jan 24 12:00:37.762000 audit: BPF prog-id=168 op=LOAD Jan 24 12:00:37.822519 kernel: audit: type=1334 audit(1769256037.762:561): prog-id=168 op=LOAD Jan 24 12:00:37.762000 audit[3589]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3376 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:37.843512 kernel: audit: type=1300 audit(1769256037.762:561): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3376 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:37.843723 kernel: audit: type=1327 audit(1769256037.762:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161366238643766356230326338333734666333663737323034373935 Jan 24 12:00:37.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161366238643766356230326338333734666333663737323034373935 Jan 24 12:00:37.862859 kernel: audit: type=1334 audit(1769256037.762:562): prog-id=168 op=UNLOAD Jan 24 12:00:37.762000 audit: BPF prog-id=168 op=UNLOAD Jan 24 12:00:37.762000 audit[3589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:37.887607 kernel: audit: type=1300 
audit(1769256037.762:562): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:37.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161366238643766356230326338333734666333663737323034373935 Jan 24 12:00:37.893821 kernel: audit: type=1327 audit(1769256037.762:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161366238643766356230326338333734666333663737323034373935 Jan 24 12:00:37.762000 audit: BPF prog-id=167 op=UNLOAD Jan 24 12:00:37.906803 kernel: audit: type=1334 audit(1769256037.762:563): prog-id=167 op=UNLOAD Jan 24 12:00:37.762000 audit[3589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:37.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161366238643766356230326338333734666333663737323034373935 Jan 24 12:00:37.762000 audit: BPF prog-id=169 op=LOAD Jan 24 12:00:37.762000 audit[3589]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3376 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:00:37.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161366238643766356230326338333734666333663737323034373935 Jan 24 12:00:37.931900 containerd[1652]: time="2026-01-24T12:00:37.931637359Z" level=info msg="StartContainer for \"aa6b8d7f5b02c8374fc3f772047951aa5a06850672a4d33cb8bed5a11cea4320\" returns successfully" Jan 24 12:00:38.043685 kubelet[2876]: E0124 12:00:38.043446 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:38.888060 kubelet[2876]: E0124 12:00:38.887919 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:00:39.053837 kubelet[2876]: E0124 12:00:39.053619 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:40.471697 systemd[1]: 
cri-containerd-aa6b8d7f5b02c8374fc3f772047951aa5a06850672a4d33cb8bed5a11cea4320.scope: Deactivated successfully. Jan 24 12:00:40.474456 systemd[1]: cri-containerd-aa6b8d7f5b02c8374fc3f772047951aa5a06850672a4d33cb8bed5a11cea4320.scope: Consumed 1.676s CPU time, 182.8M memory peak, 4M read from disk, 171.3M written to disk. Jan 24 12:00:40.488943 containerd[1652]: time="2026-01-24T12:00:40.487886138Z" level=info msg="received container exit event container_id:\"aa6b8d7f5b02c8374fc3f772047951aa5a06850672a4d33cb8bed5a11cea4320\" id:\"aa6b8d7f5b02c8374fc3f772047951aa5a06850672a4d33cb8bed5a11cea4320\" pid:3602 exited_at:{seconds:1769256040 nanos:484262122}" Jan 24 12:00:40.494000 audit: BPF prog-id=169 op=UNLOAD Jan 24 12:00:40.658998 kubelet[2876]: I0124 12:00:40.657460 2876 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 24 12:00:40.751180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa6b8d7f5b02c8374fc3f772047951aa5a06850672a4d33cb8bed5a11cea4320-rootfs.mount: Deactivated successfully. Jan 24 12:00:41.005623 systemd[1]: Created slice kubepods-besteffort-pod9e4f5042_9b9e_42a4_bad4_8066f7c50d50.slice - libcontainer container kubepods-besteffort-pod9e4f5042_9b9e_42a4_bad4_8066f7c50d50.slice. Jan 24 12:00:41.061701 systemd[1]: Created slice kubepods-besteffort-pod7e54efbd_6a62_4db3_8b3c_99aa330f72d1.slice - libcontainer container kubepods-besteffort-pod7e54efbd_6a62_4db3_8b3c_99aa330f72d1.slice. Jan 24 12:00:41.090638 systemd[1]: Created slice kubepods-burstable-pod729c35dc_3b15_46a8_9075_ed539b490113.slice - libcontainer container kubepods-burstable-pod729c35dc_3b15_46a8_9075_ed539b490113.slice. Jan 24 12:00:41.133629 kubelet[2876]: I0124 12:00:41.133486 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e4f5042-9b9e-42a4-bad4-8066f7c50d50-whisker-ca-bundle\") pod \"whisker-6dbd6bdf48-bg4gc\" (UID: \"9e4f5042-9b9e-42a4-bad4-8066f7c50d50\") " pod="calico-system/whisker-6dbd6bdf48-bg4gc" Jan 24 12:00:41.133629 kubelet[2876]: I0124 12:00:41.133622 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/729c35dc-3b15-46a8-9075-ed539b490113-config-volume\") pod \"coredns-66bc5c9577-xzzf9\" (UID: \"729c35dc-3b15-46a8-9075-ed539b490113\") " pod="kube-system/coredns-66bc5c9577-xzzf9" Jan 24 12:00:41.134033 kubelet[2876]: I0124 12:00:41.133675 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7zqt\" (UniqueName: \"kubernetes.io/projected/9e4f5042-9b9e-42a4-bad4-8066f7c50d50-kube-api-access-z7zqt\") pod \"whisker-6dbd6bdf48-bg4gc\" (UID: \"9e4f5042-9b9e-42a4-bad4-8066f7c50d50\") " pod="calico-system/whisker-6dbd6bdf48-bg4gc" Jan 24 12:00:41.134033 kubelet[2876]: I0124 12:00:41.133713 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9e4f5042-9b9e-42a4-bad4-8066f7c50d50-whisker-backend-key-pair\") pod \"whisker-6dbd6bdf48-bg4gc\" (UID: \"9e4f5042-9b9e-42a4-bad4-8066f7c50d50\") " pod="calico-system/whisker-6dbd6bdf48-bg4gc" Jan 24 12:00:41.134033 kubelet[2876]: I0124 12:00:41.133741 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx96d\" (UniqueName: 
\"kubernetes.io/projected/729c35dc-3b15-46a8-9075-ed539b490113-kube-api-access-tx96d\") pod \"coredns-66bc5c9577-xzzf9\" (UID: \"729c35dc-3b15-46a8-9075-ed539b490113\") " pod="kube-system/coredns-66bc5c9577-xzzf9" Jan 24 12:00:41.134033 kubelet[2876]: I0124 12:00:41.133767 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d0d046c-db5e-4542-a5a5-0466daa13e9a-config-volume\") pod \"coredns-66bc5c9577-9mzkh\" (UID: \"5d0d046c-db5e-4542-a5a5-0466daa13e9a\") " pod="kube-system/coredns-66bc5c9577-9mzkh" Jan 24 12:00:41.134033 kubelet[2876]: I0124 12:00:41.133802 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e54efbd-6a62-4db3-8b3c-99aa330f72d1-calico-apiserver-certs\") pod \"calico-apiserver-76997bfb4b-55f79\" (UID: \"7e54efbd-6a62-4db3-8b3c-99aa330f72d1\") " pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" Jan 24 12:00:41.134380 kubelet[2876]: I0124 12:00:41.133829 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jts2\" (UniqueName: \"kubernetes.io/projected/7e54efbd-6a62-4db3-8b3c-99aa330f72d1-kube-api-access-2jts2\") pod \"calico-apiserver-76997bfb4b-55f79\" (UID: \"7e54efbd-6a62-4db3-8b3c-99aa330f72d1\") " pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" Jan 24 12:00:41.134380 kubelet[2876]: I0124 12:00:41.134348 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lfkq\" (UniqueName: \"kubernetes.io/projected/5d0d046c-db5e-4542-a5a5-0466daa13e9a-kube-api-access-4lfkq\") pod \"coredns-66bc5c9577-9mzkh\" (UID: \"5d0d046c-db5e-4542-a5a5-0466daa13e9a\") " pod="kube-system/coredns-66bc5c9577-9mzkh" Jan 24 12:00:41.154665 systemd[1]: Created slice kubepods-burstable-pod5d0d046c_db5e_4542_a5a5_0466daa13e9a.slice - libcontainer container kubepods-burstable-pod5d0d046c_db5e_4542_a5a5_0466daa13e9a.slice. Jan 24 12:00:41.188723 systemd[1]: Created slice kubepods-besteffort-poddd704f6f_5a9f_42a8_93d9_5d24176bfd82.slice - libcontainer container kubepods-besteffort-poddd704f6f_5a9f_42a8_93d9_5d24176bfd82.slice. Jan 24 12:00:41.206917 systemd[1]: Created slice kubepods-besteffort-pod3477849f_ef62_42dc_be46_c8edc5b93ccb.slice - libcontainer container kubepods-besteffort-pod3477849f_ef62_42dc_be46_c8edc5b93ccb.slice. 
Jan 24 12:00:41.235158 kubelet[2876]: I0124 12:00:41.235039 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfdtb\" (UniqueName: \"kubernetes.io/projected/543ea964-5bd2-4a2a-be7e-5b64397ea1f6-kube-api-access-cfdtb\") pod \"calico-apiserver-76997bfb4b-ggwxc\" (UID: \"543ea964-5bd2-4a2a-be7e-5b64397ea1f6\") " pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" Jan 24 12:00:41.235158 kubelet[2876]: I0124 12:00:41.235124 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f944553c-3de6-4dea-af30-6e177d6839ad-goldmane-key-pair\") pod \"goldmane-7c778bb748-k2xcd\" (UID: \"f944553c-3de6-4dea-af30-6e177d6839ad\") " pod="calico-system/goldmane-7c778bb748-k2xcd" Jan 24 12:00:41.235328 kubelet[2876]: I0124 12:00:41.235207 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f944553c-3de6-4dea-af30-6e177d6839ad-config\") pod \"goldmane-7c778bb748-k2xcd\" (UID: \"f944553c-3de6-4dea-af30-6e177d6839ad\") " pod="calico-system/goldmane-7c778bb748-k2xcd" Jan 24 12:00:41.235328 kubelet[2876]: I0124 12:00:41.235272 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j87s\" (UniqueName: \"kubernetes.io/projected/dd704f6f-5a9f-42a8-93d9-5d24176bfd82-kube-api-access-5j87s\") pod \"calico-kube-controllers-5d9dddf448-n9r2d\" (UID: \"dd704f6f-5a9f-42a8-93d9-5d24176bfd82\") " pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" Jan 24 12:00:41.235328 kubelet[2876]: I0124 12:00:41.235299 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/543ea964-5bd2-4a2a-be7e-5b64397ea1f6-calico-apiserver-certs\") pod \"calico-apiserver-76997bfb4b-ggwxc\" (UID: \"543ea964-5bd2-4a2a-be7e-5b64397ea1f6\") " pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" Jan 24 12:00:41.235430 kubelet[2876]: I0124 12:00:41.235325 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9llst\" (UniqueName: \"kubernetes.io/projected/f944553c-3de6-4dea-af30-6e177d6839ad-kube-api-access-9llst\") pod \"goldmane-7c778bb748-k2xcd\" (UID: \"f944553c-3de6-4dea-af30-6e177d6839ad\") " pod="calico-system/goldmane-7c778bb748-k2xcd" Jan 24 12:00:41.235430 kubelet[2876]: I0124 12:00:41.235383 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd704f6f-5a9f-42a8-93d9-5d24176bfd82-tigera-ca-bundle\") pod \"calico-kube-controllers-5d9dddf448-n9r2d\" (UID: \"dd704f6f-5a9f-42a8-93d9-5d24176bfd82\") " pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" Jan 24 12:00:41.235430 kubelet[2876]: I0124 12:00:41.235418 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f944553c-3de6-4dea-af30-6e177d6839ad-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-k2xcd\" (UID: \"f944553c-3de6-4dea-af30-6e177d6839ad\") " pod="calico-system/goldmane-7c778bb748-k2xcd" Jan 24 12:00:41.271730 containerd[1652]: time="2026-01-24T12:00:41.263997228Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-576w8,Uid:3477849f-ef62-42dc-be46-c8edc5b93ccb,Namespace:calico-system,Attempt:0,}" Jan 24 12:00:41.283611 kubelet[2876]: E0124 12:00:41.283454 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:41.293898 containerd[1652]: time="2026-01-24T12:00:41.291910284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 12:00:41.363742 systemd[1]: Created slice kubepods-besteffort-pod543ea964_5bd2_4a2a_be7e_5b64397ea1f6.slice - libcontainer container kubepods-besteffort-pod543ea964_5bd2_4a2a_be7e_5b64397ea1f6.slice. Jan 24 12:00:41.378430 containerd[1652]: time="2026-01-24T12:00:41.371983459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dbd6bdf48-bg4gc,Uid:9e4f5042-9b9e-42a4-bad4-8066f7c50d50,Namespace:calico-system,Attempt:0,}" Jan 24 12:00:41.380466 systemd[1]: Created slice kubepods-besteffort-podf944553c_3de6_4dea_af30_6e177d6839ad.slice - libcontainer container kubepods-besteffort-podf944553c_3de6_4dea_af30_6e177d6839ad.slice. Jan 24 12:00:41.450610 kubelet[2876]: E0124 12:00:41.449269 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:41.464176 containerd[1652]: time="2026-01-24T12:00:41.461933796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76997bfb4b-55f79,Uid:7e54efbd-6a62-4db3-8b3c-99aa330f72d1,Namespace:calico-apiserver,Attempt:0,}" Jan 24 12:00:41.464176 containerd[1652]: time="2026-01-24T12:00:41.463810927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xzzf9,Uid:729c35dc-3b15-46a8-9075-ed539b490113,Namespace:kube-system,Attempt:0,}" Jan 24 12:00:41.472821 kubelet[2876]: E0124 12:00:41.472706 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:41.479078 containerd[1652]: time="2026-01-24T12:00:41.479031627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9mzkh,Uid:5d0d046c-db5e-4542-a5a5-0466daa13e9a,Namespace:kube-system,Attempt:0,}" Jan 24 12:00:41.601437 containerd[1652]: time="2026-01-24T12:00:41.601124006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9dddf448-n9r2d,Uid:dd704f6f-5a9f-42a8-93d9-5d24176bfd82,Namespace:calico-system,Attempt:0,}" Jan 24 12:00:41.734285 containerd[1652]: time="2026-01-24T12:00:41.734110582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-k2xcd,Uid:f944553c-3de6-4dea-af30-6e177d6839ad,Namespace:calico-system,Attempt:0,}" Jan 24 12:00:41.734285 containerd[1652]: time="2026-01-24T12:00:41.734249043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76997bfb4b-ggwxc,Uid:543ea964-5bd2-4a2a-be7e-5b64397ea1f6,Namespace:calico-apiserver,Attempt:0,}" Jan 24 12:00:42.097257 containerd[1652]: time="2026-01-24T12:00:42.095825860Z" level=error msg="Failed to destroy network for sandbox \"39f7fcbd907e1166b05d0e2b52e90fc59b3bf383a0c8a00444e0d68a7b54da2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.190646 containerd[1652]: 
time="2026-01-24T12:00:42.190458416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dbd6bdf48-bg4gc,Uid:9e4f5042-9b9e-42a4-bad4-8066f7c50d50,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39f7fcbd907e1166b05d0e2b52e90fc59b3bf383a0c8a00444e0d68a7b54da2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.191744 kubelet[2876]: E0124 12:00:42.191602 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39f7fcbd907e1166b05d0e2b52e90fc59b3bf383a0c8a00444e0d68a7b54da2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.192718 kubelet[2876]: E0124 12:00:42.191827 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39f7fcbd907e1166b05d0e2b52e90fc59b3bf383a0c8a00444e0d68a7b54da2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dbd6bdf48-bg4gc" Jan 24 12:00:42.192718 kubelet[2876]: E0124 12:00:42.192120 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39f7fcbd907e1166b05d0e2b52e90fc59b3bf383a0c8a00444e0d68a7b54da2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dbd6bdf48-bg4gc" Jan 24 12:00:42.196449 kubelet[2876]: E0124 12:00:42.194771 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dbd6bdf48-bg4gc_calico-system(9e4f5042-9b9e-42a4-bad4-8066f7c50d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dbd6bdf48-bg4gc_calico-system(9e4f5042-9b9e-42a4-bad4-8066f7c50d50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39f7fcbd907e1166b05d0e2b52e90fc59b3bf383a0c8a00444e0d68a7b54da2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dbd6bdf48-bg4gc" podUID="9e4f5042-9b9e-42a4-bad4-8066f7c50d50" Jan 24 12:00:42.274418 containerd[1652]: time="2026-01-24T12:00:42.273654044Z" level=error msg="Failed to destroy network for sandbox \"cc673abe9aa1e572ee5bac4f36874c96623577810d3cdf1cd024dac110e0e352\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.321319 containerd[1652]: time="2026-01-24T12:00:42.321170886Z" level=error msg="Failed to destroy network for sandbox \"2b878a39c9f49cc79529d32ae42ace7a83c97e114acc18bc6efdc9bc88e4c9da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 
24 12:00:42.330358 containerd[1652]: time="2026-01-24T12:00:42.326812433Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-576w8,Uid:3477849f-ef62-42dc-be46-c8edc5b93ccb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc673abe9aa1e572ee5bac4f36874c96623577810d3cdf1cd024dac110e0e352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.360002 kubelet[2876]: E0124 12:00:42.358302 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc673abe9aa1e572ee5bac4f36874c96623577810d3cdf1cd024dac110e0e352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.367281 containerd[1652]: time="2026-01-24T12:00:42.366247959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xzzf9,Uid:729c35dc-3b15-46a8-9075-ed539b490113,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b878a39c9f49cc79529d32ae42ace7a83c97e114acc18bc6efdc9bc88e4c9da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.367494 kubelet[2876]: E0124 12:00:42.366847 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b878a39c9f49cc79529d32ae42ace7a83c97e114acc18bc6efdc9bc88e4c9da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.367494 kubelet[2876]: E0124 12:00:42.367002 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b878a39c9f49cc79529d32ae42ace7a83c97e114acc18bc6efdc9bc88e4c9da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xzzf9" Jan 24 12:00:42.367494 kubelet[2876]: E0124 12:00:42.367035 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b878a39c9f49cc79529d32ae42ace7a83c97e114acc18bc6efdc9bc88e4c9da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xzzf9" Jan 24 12:00:42.367830 kubelet[2876]: E0124 12:00:42.367174 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xzzf9_kube-system(729c35dc-3b15-46a8-9075-ed539b490113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xzzf9_kube-system(729c35dc-3b15-46a8-9075-ed539b490113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b878a39c9f49cc79529d32ae42ace7a83c97e114acc18bc6efdc9bc88e4c9da\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xzzf9" podUID="729c35dc-3b15-46a8-9075-ed539b490113" Jan 24 12:00:42.368347 kubelet[2876]: E0124 12:00:42.368309 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc673abe9aa1e572ee5bac4f36874c96623577810d3cdf1cd024dac110e0e352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-576w8" Jan 24 12:00:42.370449 kubelet[2876]: E0124 12:00:42.370313 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc673abe9aa1e572ee5bac4f36874c96623577810d3cdf1cd024dac110e0e352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-576w8" Jan 24 12:00:42.370449 kubelet[2876]: E0124 12:00:42.370398 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-576w8_calico-system(3477849f-ef62-42dc-be46-c8edc5b93ccb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-576w8_calico-system(3477849f-ef62-42dc-be46-c8edc5b93ccb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc673abe9aa1e572ee5bac4f36874c96623577810d3cdf1cd024dac110e0e352\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:00:42.390791 containerd[1652]: time="2026-01-24T12:00:42.390728839Z" level=error msg="Failed to destroy network for sandbox \"316cd3009c5b95db80288986d7794cc9bf7f12918c8cd24fe0c32dd72063cdff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.422677 containerd[1652]: time="2026-01-24T12:00:42.413837468Z" level=error msg="Failed to destroy network for sandbox \"21adb6d95b1b948182e4d67b95c65772dcd073cf7ec4ed77603cf46e5647a651\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.422677 containerd[1652]: time="2026-01-24T12:00:42.421650203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76997bfb4b-55f79,Uid:7e54efbd-6a62-4db3-8b3c-99aa330f72d1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"316cd3009c5b95db80288986d7794cc9bf7f12918c8cd24fe0c32dd72063cdff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.423966 kubelet[2876]: E0124 12:00:42.423400 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"316cd3009c5b95db80288986d7794cc9bf7f12918c8cd24fe0c32dd72063cdff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.423966 kubelet[2876]: E0124 12:00:42.423482 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"316cd3009c5b95db80288986d7794cc9bf7f12918c8cd24fe0c32dd72063cdff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" Jan 24 12:00:42.423966 kubelet[2876]: E0124 12:00:42.423511 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"316cd3009c5b95db80288986d7794cc9bf7f12918c8cd24fe0c32dd72063cdff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" Jan 24 12:00:42.424292 kubelet[2876]: E0124 12:00:42.423654 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76997bfb4b-55f79_calico-apiserver(7e54efbd-6a62-4db3-8b3c-99aa330f72d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76997bfb4b-55f79_calico-apiserver(7e54efbd-6a62-4db3-8b3c-99aa330f72d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"316cd3009c5b95db80288986d7794cc9bf7f12918c8cd24fe0c32dd72063cdff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:00:42.438906 containerd[1652]: time="2026-01-24T12:00:42.438795571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9dddf448-n9r2d,Uid:dd704f6f-5a9f-42a8-93d9-5d24176bfd82,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21adb6d95b1b948182e4d67b95c65772dcd073cf7ec4ed77603cf46e5647a651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.442024 kubelet[2876]: E0124 12:00:42.441756 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21adb6d95b1b948182e4d67b95c65772dcd073cf7ec4ed77603cf46e5647a651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.442024 kubelet[2876]: E0124 12:00:42.441851 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21adb6d95b1b948182e4d67b95c65772dcd073cf7ec4ed77603cf46e5647a651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" Jan 24 12:00:42.442024 kubelet[2876]: E0124 12:00:42.441913 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21adb6d95b1b948182e4d67b95c65772dcd073cf7ec4ed77603cf46e5647a651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" Jan 24 12:00:42.442278 kubelet[2876]: E0124 12:00:42.441997 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d9dddf448-n9r2d_calico-system(dd704f6f-5a9f-42a8-93d9-5d24176bfd82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d9dddf448-n9r2d_calico-system(dd704f6f-5a9f-42a8-93d9-5d24176bfd82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21adb6d95b1b948182e4d67b95c65772dcd073cf7ec4ed77603cf46e5647a651\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:00:42.447975 containerd[1652]: time="2026-01-24T12:00:42.447910756Z" level=error msg="Failed to destroy network for sandbox \"b8910bc900443d7dfa5898f850427312c28588fe6449c6c797c9324f1680bf64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.448520 containerd[1652]: time="2026-01-24T12:00:42.448341448Z" level=error msg="Failed to destroy network for sandbox \"d05d660bd06834d15aeb3621061c8b83e1a22adaea0c13d11bd00359bca9c20a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.466610 containerd[1652]: time="2026-01-24T12:00:42.466234227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-k2xcd,Uid:f944553c-3de6-4dea-af30-6e177d6839ad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8910bc900443d7dfa5898f850427312c28588fe6449c6c797c9324f1680bf64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.470439 kubelet[2876]: E0124 12:00:42.467207 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8910bc900443d7dfa5898f850427312c28588fe6449c6c797c9324f1680bf64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.470439 kubelet[2876]: E0124 12:00:42.467296 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b8910bc900443d7dfa5898f850427312c28588fe6449c6c797c9324f1680bf64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-k2xcd" Jan 24 12:00:42.470439 kubelet[2876]: E0124 12:00:42.467329 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8910bc900443d7dfa5898f850427312c28588fe6449c6c797c9324f1680bf64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-k2xcd" Jan 24 12:00:42.470699 kubelet[2876]: E0124 12:00:42.467393 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-k2xcd_calico-system(f944553c-3de6-4dea-af30-6e177d6839ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-k2xcd_calico-system(f944553c-3de6-4dea-af30-6e177d6839ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8910bc900443d7dfa5898f850427312c28588fe6449c6c797c9324f1680bf64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:00:42.503155 containerd[1652]: time="2026-01-24T12:00:42.500323546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9mzkh,Uid:5d0d046c-db5e-4542-a5a5-0466daa13e9a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d05d660bd06834d15aeb3621061c8b83e1a22adaea0c13d11bd00359bca9c20a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.503423 kubelet[2876]: E0124 12:00:42.500768 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d05d660bd06834d15aeb3621061c8b83e1a22adaea0c13d11bd00359bca9c20a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.503423 kubelet[2876]: E0124 12:00:42.500857 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d05d660bd06834d15aeb3621061c8b83e1a22adaea0c13d11bd00359bca9c20a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9mzkh" Jan 24 12:00:42.503423 kubelet[2876]: E0124 12:00:42.500938 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d05d660bd06834d15aeb3621061c8b83e1a22adaea0c13d11bd00359bca9c20a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-9mzkh" Jan 24 12:00:42.503706 kubelet[2876]: E0124 12:00:42.501010 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9mzkh_kube-system(5d0d046c-db5e-4542-a5a5-0466daa13e9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9mzkh_kube-system(5d0d046c-db5e-4542-a5a5-0466daa13e9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d05d660bd06834d15aeb3621061c8b83e1a22adaea0c13d11bd00359bca9c20a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9mzkh" podUID="5d0d046c-db5e-4542-a5a5-0466daa13e9a" Jan 24 12:00:42.543135 containerd[1652]: time="2026-01-24T12:00:42.543069340Z" level=error msg="Failed to destroy network for sandbox \"9bd6cad1b0680dd03e6f4f34899e401e5a2330309de6b1546a619fcf30dd8e92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.564286 containerd[1652]: time="2026-01-24T12:00:42.563861753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76997bfb4b-ggwxc,Uid:543ea964-5bd2-4a2a-be7e-5b64397ea1f6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bd6cad1b0680dd03e6f4f34899e401e5a2330309de6b1546a619fcf30dd8e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.565308 kubelet[2876]: E0124 12:00:42.565056 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bd6cad1b0680dd03e6f4f34899e401e5a2330309de6b1546a619fcf30dd8e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:42.565308 kubelet[2876]: E0124 12:00:42.565135 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bd6cad1b0680dd03e6f4f34899e401e5a2330309de6b1546a619fcf30dd8e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" Jan 24 12:00:42.565308 kubelet[2876]: E0124 12:00:42.565167 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bd6cad1b0680dd03e6f4f34899e401e5a2330309de6b1546a619fcf30dd8e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" Jan 24 12:00:42.565526 kubelet[2876]: E0124 12:00:42.565278 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76997bfb4b-ggwxc_calico-apiserver(543ea964-5bd2-4a2a-be7e-5b64397ea1f6)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-apiserver-76997bfb4b-ggwxc_calico-apiserver(543ea964-5bd2-4a2a-be7e-5b64397ea1f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bd6cad1b0680dd03e6f4f34899e401e5a2330309de6b1546a619fcf30dd8e92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:00:42.669813 systemd[1]: run-netns-cni\x2d891670d5\x2d0b2a\x2d990b\x2d1041\x2deba52ac25075.mount: Deactivated successfully. Jan 24 12:00:42.670015 systemd[1]: run-netns-cni\x2dbdfbe7b6\x2df330\x2df562\x2d74cc\x2daa8e6777cffa.mount: Deactivated successfully. Jan 24 12:00:42.670110 systemd[1]: run-netns-cni\x2d914566f7\x2d96c5\x2d4dca\x2dfc5a\x2dd2f0d09215f4.mount: Deactivated successfully. Jan 24 12:00:42.670205 systemd[1]: run-netns-cni\x2d90489688\x2d5007\x2d1964\x2d9968\x2d80d71a8c7e33.mount: Deactivated successfully. Jan 24 12:00:42.670293 systemd[1]: run-netns-cni\x2d5d9063ae\x2dce16\x2d537f\x2d9d9e\x2d1adee20642a5.mount: Deactivated successfully. Jan 24 12:00:42.670424 systemd[1]: run-netns-cni\x2d48cda587\x2d7457\x2dba28\x2d181a\x2de2862030fc4a.mount: Deactivated successfully. Jan 24 12:00:42.670511 systemd[1]: run-netns-cni\x2dbceb8e34\x2dc0f3\x2d13fe\x2d61ea\x2d3e712a3f5663.mount: Deactivated successfully. Jan 24 12:00:54.964918 kubelet[2876]: E0124 12:00:54.964820 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:54.966706 containerd[1652]: time="2026-01-24T12:00:54.966434505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9mzkh,Uid:5d0d046c-db5e-4542-a5a5-0466daa13e9a,Namespace:kube-system,Attempt:0,}" Jan 24 12:00:54.981996 containerd[1652]: time="2026-01-24T12:00:54.981886318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-576w8,Uid:3477849f-ef62-42dc-be46-c8edc5b93ccb,Namespace:calico-system,Attempt:0,}" Jan 24 12:00:55.282422 containerd[1652]: time="2026-01-24T12:00:55.280629499Z" level=error msg="Failed to destroy network for sandbox \"b55891391373b33ecd5c54bf6812cc4f4f112237ab0bc0dbc08dccf941fbb4e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:55.288960 systemd[1]: run-netns-cni\x2d108e51d9\x2d646f\x2d4a34\x2d9923\x2da669bca4c7d4.mount: Deactivated successfully. Jan 24 12:00:55.336391 containerd[1652]: time="2026-01-24T12:00:55.325108501Z" level=error msg="Failed to destroy network for sandbox \"b67557d0ae6dafce49ab1b66528cc1969000646cf76bacd56aed2ae63e775c4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:55.339951 systemd[1]: run-netns-cni\x2de406793a\x2d4efa\x2d224d\x2d3900\x2d7c0eab7f9268.mount: Deactivated successfully. 
Jan 24 12:00:55.351632 containerd[1652]: time="2026-01-24T12:00:55.346926954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9mzkh,Uid:5d0d046c-db5e-4542-a5a5-0466daa13e9a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b55891391373b33ecd5c54bf6812cc4f4f112237ab0bc0dbc08dccf941fbb4e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:55.353847 kubelet[2876]: E0124 12:00:55.352083 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b55891391373b33ecd5c54bf6812cc4f4f112237ab0bc0dbc08dccf941fbb4e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:55.353847 kubelet[2876]: E0124 12:00:55.352178 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b55891391373b33ecd5c54bf6812cc4f4f112237ab0bc0dbc08dccf941fbb4e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9mzkh" Jan 24 12:00:55.353847 kubelet[2876]: E0124 12:00:55.352208 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b55891391373b33ecd5c54bf6812cc4f4f112237ab0bc0dbc08dccf941fbb4e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9mzkh" Jan 24 12:00:55.354051 kubelet[2876]: E0124 12:00:55.352373 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9mzkh_kube-system(5d0d046c-db5e-4542-a5a5-0466daa13e9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9mzkh_kube-system(5d0d046c-db5e-4542-a5a5-0466daa13e9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b55891391373b33ecd5c54bf6812cc4f4f112237ab0bc0dbc08dccf941fbb4e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9mzkh" podUID="5d0d046c-db5e-4542-a5a5-0466daa13e9a" Jan 24 12:00:55.356618 containerd[1652]: time="2026-01-24T12:00:55.356332598Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-576w8,Uid:3477849f-ef62-42dc-be46-c8edc5b93ccb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67557d0ae6dafce49ab1b66528cc1969000646cf76bacd56aed2ae63e775c4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:55.357796 kubelet[2876]: E0124 12:00:55.357468 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"b67557d0ae6dafce49ab1b66528cc1969000646cf76bacd56aed2ae63e775c4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:55.357796 kubelet[2876]: E0124 12:00:55.357597 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67557d0ae6dafce49ab1b66528cc1969000646cf76bacd56aed2ae63e775c4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-576w8" Jan 24 12:00:55.357796 kubelet[2876]: E0124 12:00:55.357628 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67557d0ae6dafce49ab1b66528cc1969000646cf76bacd56aed2ae63e775c4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-576w8" Jan 24 12:00:55.357939 kubelet[2876]: E0124 12:00:55.357685 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-576w8_calico-system(3477849f-ef62-42dc-be46-c8edc5b93ccb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-576w8_calico-system(3477849f-ef62-42dc-be46-c8edc5b93ccb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b67557d0ae6dafce49ab1b66528cc1969000646cf76bacd56aed2ae63e775c4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:00:55.925480 containerd[1652]: time="2026-01-24T12:00:55.905519599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76997bfb4b-55f79,Uid:7e54efbd-6a62-4db3-8b3c-99aa330f72d1,Namespace:calico-apiserver,Attempt:0,}" Jan 24 12:00:55.935461 containerd[1652]: time="2026-01-24T12:00:55.934667190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76997bfb4b-ggwxc,Uid:543ea964-5bd2-4a2a-be7e-5b64397ea1f6,Namespace:calico-apiserver,Attempt:0,}" Jan 24 12:00:56.248186 containerd[1652]: time="2026-01-24T12:00:56.247747804Z" level=error msg="Failed to destroy network for sandbox \"2a26cd8a7255e20f8bb02abc10980b02ec407d727e73c2765e782de0769b5185\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:56.261813 systemd[1]: run-netns-cni\x2db94b1d47\x2d7250\x2de24e\x2d3ec4\x2d55bbe67b64b1.mount: Deactivated successfully. 
Jan 24 12:00:56.279931 containerd[1652]: time="2026-01-24T12:00:56.277217765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76997bfb4b-ggwxc,Uid:543ea964-5bd2-4a2a-be7e-5b64397ea1f6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a26cd8a7255e20f8bb02abc10980b02ec407d727e73c2765e782de0769b5185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:56.280167 kubelet[2876]: E0124 12:00:56.277785 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a26cd8a7255e20f8bb02abc10980b02ec407d727e73c2765e782de0769b5185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:56.280167 kubelet[2876]: E0124 12:00:56.277936 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a26cd8a7255e20f8bb02abc10980b02ec407d727e73c2765e782de0769b5185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" Jan 24 12:00:56.280167 kubelet[2876]: E0124 12:00:56.277968 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a26cd8a7255e20f8bb02abc10980b02ec407d727e73c2765e782de0769b5185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" Jan 24 12:00:56.280872 kubelet[2876]: E0124 12:00:56.278155 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76997bfb4b-ggwxc_calico-apiserver(543ea964-5bd2-4a2a-be7e-5b64397ea1f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76997bfb4b-ggwxc_calico-apiserver(543ea964-5bd2-4a2a-be7e-5b64397ea1f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a26cd8a7255e20f8bb02abc10980b02ec407d727e73c2765e782de0769b5185\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:00:56.302091 containerd[1652]: time="2026-01-24T12:00:56.301911114Z" level=error msg="Failed to destroy network for sandbox \"c160bedbe56875ffabc8c73874c63648b659db81219da9426dc347f731c5a630\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:56.338310 systemd[1]: run-netns-cni\x2d2ee43287\x2de280\x2d1606\x2db554\x2d52c1d9babb8b.mount: Deactivated successfully. 
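The run-netns-cni\x2d... units being cleaned up here are systemd mount units whose names encode the /run/netns/cni-<id> bind mount that held the pod's network namespace: path separators become "-" and literal "-" characters are escaped as \x2d. A small decoder, roughly what `systemd-escape --unescape --path` does:

# Recover the mount path from a systemd mount-unit name as printed in the log.
import re

def unit_to_path(unit: str) -> str:
    body = unit.rsplit(".", 1)[0]          # drop the ".mount" suffix
    body = body.replace("-", "/")          # unit-name separators stand for "/"
    body = re.sub(r"\\x([0-9a-fA-F]{2})",  # \xHH escapes are literal bytes
                  lambda m: chr(int(m.group(1), 16)), body)
    return "/" + body

print(unit_to_path(r"run-netns-cni\x2d2ee43287\x2de280\x2d1606\x2db554\x2d52c1d9babb8b.mount"))
# -> /run/netns/cni-2ee43287-e280-1606-b554-52c1d9babb8b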
Jan 24 12:00:56.345425 containerd[1652]: time="2026-01-24T12:00:56.342645000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76997bfb4b-55f79,Uid:7e54efbd-6a62-4db3-8b3c-99aa330f72d1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c160bedbe56875ffabc8c73874c63648b659db81219da9426dc347f731c5a630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:56.345641 kubelet[2876]: E0124 12:00:56.343887 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c160bedbe56875ffabc8c73874c63648b659db81219da9426dc347f731c5a630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:56.345641 kubelet[2876]: E0124 12:00:56.343973 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c160bedbe56875ffabc8c73874c63648b659db81219da9426dc347f731c5a630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" Jan 24 12:00:56.345641 kubelet[2876]: E0124 12:00:56.344005 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c160bedbe56875ffabc8c73874c63648b659db81219da9426dc347f731c5a630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" Jan 24 12:00:56.351784 kubelet[2876]: E0124 12:00:56.345927 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76997bfb4b-55f79_calico-apiserver(7e54efbd-6a62-4db3-8b3c-99aa330f72d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76997bfb4b-55f79_calico-apiserver(7e54efbd-6a62-4db3-8b3c-99aa330f72d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c160bedbe56875ffabc8c73874c63648b659db81219da9426dc347f731c5a630\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:00:56.906152 containerd[1652]: time="2026-01-24T12:00:56.906099103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-k2xcd,Uid:f944553c-3de6-4dea-af30-6e177d6839ad,Namespace:calico-system,Attempt:0,}" Jan 24 12:00:56.940388 containerd[1652]: time="2026-01-24T12:00:56.939436446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9dddf448-n9r2d,Uid:dd704f6f-5a9f-42a8-93d9-5d24176bfd82,Namespace:calico-system,Attempt:0,}" Jan 24 12:00:56.966581 containerd[1652]: time="2026-01-24T12:00:56.966432229Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6dbd6bdf48-bg4gc,Uid:9e4f5042-9b9e-42a4-bad4-8066f7c50d50,Namespace:calico-system,Attempt:0,}" Jan 24 12:00:56.967050 kubelet[2876]: E0124 12:00:56.966761 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:00:56.973698 containerd[1652]: time="2026-01-24T12:00:56.973503656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xzzf9,Uid:729c35dc-3b15-46a8-9075-ed539b490113,Namespace:kube-system,Attempt:0,}" Jan 24 12:00:57.277202 containerd[1652]: time="2026-01-24T12:00:57.276824785Z" level=error msg="Failed to destroy network for sandbox \"a7f2a6dc516db4fd2834a59ae385facba23968f7f0dd92585bf234f9a18de820\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:57.296408 systemd[1]: run-netns-cni\x2d3e5f1746\x2d622a\x2d15ca\x2d4cb6\x2dfdea634ac691.mount: Deactivated successfully. Jan 24 12:00:57.304459 containerd[1652]: time="2026-01-24T12:00:57.304357394Z" level=error msg="Failed to destroy network for sandbox \"dddc751cb9776828f25f6a1e08306e4339ab0ae8fef4bcec30cfb4cd6614880a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:57.318408 systemd[1]: run-netns-cni\x2d4546186c\x2db628\x2db05d\x2d8949\x2d5207188e448c.mount: Deactivated successfully. Jan 24 12:00:57.336668 containerd[1652]: time="2026-01-24T12:00:57.329476979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xzzf9,Uid:729c35dc-3b15-46a8-9075-ed539b490113,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f2a6dc516db4fd2834a59ae385facba23968f7f0dd92585bf234f9a18de820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:57.336668 containerd[1652]: time="2026-01-24T12:00:57.331466505Z" level=error msg="Failed to destroy network for sandbox \"c0ee03a521312747b56bfeb3899791c397d8075211e1bb58203311bd6aad4f3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:57.336965 kubelet[2876]: E0124 12:00:57.329957 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f2a6dc516db4fd2834a59ae385facba23968f7f0dd92585bf234f9a18de820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:57.336965 kubelet[2876]: E0124 12:00:57.330031 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f2a6dc516db4fd2834a59ae385facba23968f7f0dd92585bf234f9a18de820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xzzf9" 
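Since the same handful of pods keeps cycling through this error, a quick way to get an overview is to tally the kubelet "Failed to create sandbox for pod" messages per pod from a saved journal excerpt. The node.log filename below is a placeholder for output captured with something like `journalctl -u kubelet`:

# Count sandbox-creation failures per pod in a saved journal excerpt.
import collections
import re
import sys

pat = re.compile(r'"Failed to create sandbox for pod".*?pod="([^"]+)"')
counts = collections.Counter()

with open(sys.argv[1] if len(sys.argv) > 1 else "node.log") as f:
    for line in f:
        for m in pat.finditer(line):
            counts[m.group(1)] += 1

for pod, n in counts.most_common():
    print(f"{n:3d}  {pod}")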
Jan 24 12:00:57.336965 kubelet[2876]: E0124 12:00:57.330091 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f2a6dc516db4fd2834a59ae385facba23968f7f0dd92585bf234f9a18de820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xzzf9" Jan 24 12:00:57.339187 kubelet[2876]: E0124 12:00:57.330172 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xzzf9_kube-system(729c35dc-3b15-46a8-9075-ed539b490113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xzzf9_kube-system(729c35dc-3b15-46a8-9075-ed539b490113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7f2a6dc516db4fd2834a59ae385facba23968f7f0dd92585bf234f9a18de820\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xzzf9" podUID="729c35dc-3b15-46a8-9075-ed539b490113" Jan 24 12:00:57.339421 systemd[1]: run-netns-cni\x2db6e6c5ca\x2db676\x2d845c\x2d9db6\x2d49774329eb25.mount: Deactivated successfully. Jan 24 12:00:57.340175 containerd[1652]: time="2026-01-24T12:00:57.339775610Z" level=error msg="Failed to destroy network for sandbox \"2484e4d53fec5a5c61d4b105e412cfbb2afd455f0fbac934868c615f49f4191a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:57.346853 containerd[1652]: time="2026-01-24T12:00:57.346659063Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9dddf448-n9r2d,Uid:dd704f6f-5a9f-42a8-93d9-5d24176bfd82,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0ee03a521312747b56bfeb3899791c397d8075211e1bb58203311bd6aad4f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:57.351639 kubelet[2876]: E0124 12:00:57.351457 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0ee03a521312747b56bfeb3899791c397d8075211e1bb58203311bd6aad4f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:57.351953 kubelet[2876]: E0124 12:00:57.351672 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0ee03a521312747b56bfeb3899791c397d8075211e1bb58203311bd6aad4f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" Jan 24 12:00:57.351953 kubelet[2876]: E0124 12:00:57.351704 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"c0ee03a521312747b56bfeb3899791c397d8075211e1bb58203311bd6aad4f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" Jan 24 12:00:57.351953 kubelet[2876]: E0124 12:00:57.351812 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d9dddf448-n9r2d_calico-system(dd704f6f-5a9f-42a8-93d9-5d24176bfd82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d9dddf448-n9r2d_calico-system(dd704f6f-5a9f-42a8-93d9-5d24176bfd82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0ee03a521312747b56bfeb3899791c397d8075211e1bb58203311bd6aad4f3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:00:57.352824 containerd[1652]: time="2026-01-24T12:00:57.352597940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-k2xcd,Uid:f944553c-3de6-4dea-af30-6e177d6839ad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dddc751cb9776828f25f6a1e08306e4339ab0ae8fef4bcec30cfb4cd6614880a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:57.352969 kubelet[2876]: E0124 12:00:57.352904 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dddc751cb9776828f25f6a1e08306e4339ab0ae8fef4bcec30cfb4cd6614880a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:57.352969 kubelet[2876]: E0124 12:00:57.352947 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dddc751cb9776828f25f6a1e08306e4339ab0ae8fef4bcec30cfb4cd6614880a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-k2xcd" Jan 24 12:00:57.353072 kubelet[2876]: E0124 12:00:57.352971 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dddc751cb9776828f25f6a1e08306e4339ab0ae8fef4bcec30cfb4cd6614880a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-k2xcd" Jan 24 12:00:57.353072 kubelet[2876]: E0124 12:00:57.353027 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-k2xcd_calico-system(f944553c-3de6-4dea-af30-6e177d6839ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-7c778bb748-k2xcd_calico-system(f944553c-3de6-4dea-af30-6e177d6839ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dddc751cb9776828f25f6a1e08306e4339ab0ae8fef4bcec30cfb4cd6614880a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:00:57.362391 containerd[1652]: time="2026-01-24T12:00:57.362113556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dbd6bdf48-bg4gc,Uid:9e4f5042-9b9e-42a4-bad4-8066f7c50d50,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2484e4d53fec5a5c61d4b105e412cfbb2afd455f0fbac934868c615f49f4191a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:57.362865 kubelet[2876]: E0124 12:00:57.362496 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2484e4d53fec5a5c61d4b105e412cfbb2afd455f0fbac934868c615f49f4191a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:00:57.362865 kubelet[2876]: E0124 12:00:57.362617 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2484e4d53fec5a5c61d4b105e412cfbb2afd455f0fbac934868c615f49f4191a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dbd6bdf48-bg4gc" Jan 24 12:00:57.362865 kubelet[2876]: E0124 12:00:57.362647 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2484e4d53fec5a5c61d4b105e412cfbb2afd455f0fbac934868c615f49f4191a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dbd6bdf48-bg4gc" Jan 24 12:00:57.363501 kubelet[2876]: E0124 12:00:57.362704 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dbd6bdf48-bg4gc_calico-system(9e4f5042-9b9e-42a4-bad4-8066f7c50d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dbd6bdf48-bg4gc_calico-system(9e4f5042-9b9e-42a4-bad4-8066f7c50d50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2484e4d53fec5a5c61d4b105e412cfbb2afd455f0fbac934868c615f49f4191a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dbd6bdf48-bg4gc" podUID="9e4f5042-9b9e-42a4-bad4-8066f7c50d50" Jan 24 12:00:58.041931 systemd[1]: run-netns-cni\x2da353a80f\x2dcb53\x2d4f36\x2d951c\x2d43c1fe6b190b.mount: Deactivated successfully. 
Jan 24 12:01:06.028786 containerd[1652]: time="2026-01-24T12:01:06.028707290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-576w8,Uid:3477849f-ef62-42dc-be46-c8edc5b93ccb,Namespace:calico-system,Attempt:0,}" Jan 24 12:01:06.158088 containerd[1652]: time="2026-01-24T12:01:06.157985124Z" level=error msg="Failed to destroy network for sandbox \"a0adb2c7398217407093a0d2b8a043c38cb97d6cd885aff0b7984dde010f0cda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:01:06.163805 systemd[1]: run-netns-cni\x2d54daed15\x2dc459\x2db729\x2d92c3\x2d1ddd00e61b0a.mount: Deactivated successfully. Jan 24 12:01:06.168829 containerd[1652]: time="2026-01-24T12:01:06.168683468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-576w8,Uid:3477849f-ef62-42dc-be46-c8edc5b93ccb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0adb2c7398217407093a0d2b8a043c38cb97d6cd885aff0b7984dde010f0cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:01:06.169646 kubelet[2876]: E0124 12:01:06.169509 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0adb2c7398217407093a0d2b8a043c38cb97d6cd885aff0b7984dde010f0cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:01:06.170984 kubelet[2876]: E0124 12:01:06.170259 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0adb2c7398217407093a0d2b8a043c38cb97d6cd885aff0b7984dde010f0cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-576w8" Jan 24 12:01:06.170984 kubelet[2876]: E0124 12:01:06.170305 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0adb2c7398217407093a0d2b8a043c38cb97d6cd885aff0b7984dde010f0cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-576w8" Jan 24 12:01:06.170984 kubelet[2876]: E0124 12:01:06.170408 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-576w8_calico-system(3477849f-ef62-42dc-be46-c8edc5b93ccb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-576w8_calico-system(3477849f-ef62-42dc-be46-c8edc5b93ccb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0adb2c7398217407093a0d2b8a043c38cb97d6cd885aff0b7984dde010f0cda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" 
Jan 24 12:01:06.616387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount496746780.mount: Deactivated successfully. Jan 24 12:01:06.900398 containerd[1652]: time="2026-01-24T12:01:06.900238257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:01:06.935062 containerd[1652]: time="2026-01-24T12:01:06.934975181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 24 12:01:06.937724 kubelet[2876]: E0124 12:01:06.937445 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:06.941183 containerd[1652]: time="2026-01-24T12:01:06.939501019Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:01:06.953664 containerd[1652]: time="2026-01-24T12:01:06.951712723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 25.659725563s" Jan 24 12:01:06.953664 containerd[1652]: time="2026-01-24T12:01:06.951785791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 12:01:06.953664 containerd[1652]: time="2026-01-24T12:01:06.951928001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 12:01:06.955504 containerd[1652]: time="2026-01-24T12:01:06.955402661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9mzkh,Uid:5d0d046c-db5e-4542-a5a5-0466daa13e9a,Namespace:kube-system,Attempt:0,}" Jan 24 12:01:07.063084 containerd[1652]: time="2026-01-24T12:01:07.063023706Z" level=info msg="CreateContainer within sandbox \"159464e4aab2677242792e6aa7054bddd0c5ac366b3f20a84fb9f8fd5fcc4df2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 12:01:07.151069 containerd[1652]: time="2026-01-24T12:01:07.150466280Z" level=info msg="Container 18c4a5d25ad158aff5662d5d1bf8f7db97528c170541e0543331cdc6e360c26d: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:01:07.205874 containerd[1652]: time="2026-01-24T12:01:07.205670440Z" level=info msg="CreateContainer within sandbox \"159464e4aab2677242792e6aa7054bddd0c5ac366b3f20a84fb9f8fd5fcc4df2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"18c4a5d25ad158aff5662d5d1bf8f7db97528c170541e0543331cdc6e360c26d\"" Jan 24 12:01:07.219979 containerd[1652]: time="2026-01-24T12:01:07.218339254Z" level=info msg="StartContainer for \"18c4a5d25ad158aff5662d5d1bf8f7db97528c170541e0543331cdc6e360c26d\"" Jan 24 12:01:07.227349 containerd[1652]: time="2026-01-24T12:01:07.227054891Z" level=info msg="connecting to shim 18c4a5d25ad158aff5662d5d1bf8f7db97528c170541e0543331cdc6e360c26d" address="unix:///run/containerd/s/2b563622b22c36b584dce485db145d220d76e670e91f6039b381126b7920b7cb" protocol=ttrpc version=3 Jan 24 
12:01:07.247331 containerd[1652]: time="2026-01-24T12:01:07.245761520Z" level=error msg="Failed to destroy network for sandbox \"280f1e88f548e7f42b1bae1b956685335dc0de238831fdb8513ac716f5b2b0dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:01:07.251515 systemd[1]: run-netns-cni\x2d4675f798\x2dcf5b\x2d1f2d\x2df20f\x2dd7d2d77cb35c.mount: Deactivated successfully. Jan 24 12:01:07.259952 containerd[1652]: time="2026-01-24T12:01:07.258527787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9mzkh,Uid:5d0d046c-db5e-4542-a5a5-0466daa13e9a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"280f1e88f548e7f42b1bae1b956685335dc0de238831fdb8513ac716f5b2b0dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:01:07.260135 kubelet[2876]: E0124 12:01:07.258920 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"280f1e88f548e7f42b1bae1b956685335dc0de238831fdb8513ac716f5b2b0dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 12:01:07.260135 kubelet[2876]: E0124 12:01:07.258985 2876 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"280f1e88f548e7f42b1bae1b956685335dc0de238831fdb8513ac716f5b2b0dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9mzkh" Jan 24 12:01:07.260135 kubelet[2876]: E0124 12:01:07.259012 2876 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"280f1e88f548e7f42b1bae1b956685335dc0de238831fdb8513ac716f5b2b0dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9mzkh" Jan 24 12:01:07.260760 kubelet[2876]: E0124 12:01:07.259078 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9mzkh_kube-system(5d0d046c-db5e-4542-a5a5-0466daa13e9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9mzkh_kube-system(5d0d046c-db5e-4542-a5a5-0466daa13e9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"280f1e88f548e7f42b1bae1b956685335dc0de238831fdb8513ac716f5b2b0dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9mzkh" podUID="5d0d046c-db5e-4542-a5a5-0466daa13e9a" Jan 24 12:01:07.583812 systemd[1]: Started cri-containerd-18c4a5d25ad158aff5662d5d1bf8f7db97528c170541e0543331cdc6e360c26d.scope - libcontainer container 18c4a5d25ad158aff5662d5d1bf8f7db97528c170541e0543331cdc6e360c26d. 
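The pull that eventually unblocks all of this completes just above: containerd reports the calico/node v3.30.4 image at 156883537 bytes pulled in 25.659725563s, after which the calico-node container is created and started. As a quick sanity check on those reported figures:

# Effective pull rate from the size and duration reported by containerd above.
size_bytes = 156_883_537        # repo digest size from the "Pulled image" line
duration_s = 25.659725563       # "in 25.659725563s"
print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")   # ~6.1 MB/s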
Jan 24 12:01:07.769998 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 24 12:01:07.770168 kernel: audit: type=1334 audit(1769256067.763:566): prog-id=170 op=LOAD Jan 24 12:01:07.763000 audit: BPF prog-id=170 op=LOAD Jan 24 12:01:07.763000 audit[4231]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3376 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:07.791044 kernel: audit: type=1300 audit(1769256067.763:566): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3376 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:07.794786 kernel: audit: type=1327 audit(1769256067.763:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138633461356432356164313538616666353636326435643162663866 Jan 24 12:01:07.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138633461356432356164313538616666353636326435643162663866 Jan 24 12:01:07.763000 audit: BPF prog-id=171 op=LOAD Jan 24 12:01:07.806961 kernel: audit: type=1334 audit(1769256067.763:567): prog-id=171 op=LOAD Jan 24 12:01:07.807048 kernel: audit: type=1300 audit(1769256067.763:567): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3376 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:07.763000 audit[4231]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3376 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:07.841784 kernel: audit: type=1327 audit(1769256067.763:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138633461356432356164313538616666353636326435643162663866 Jan 24 12:01:07.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138633461356432356164313538616666353636326435643162663866 Jan 24 12:01:07.763000 audit: BPF prog-id=171 op=UNLOAD Jan 24 12:01:07.763000 audit[4231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:07.869601 kernel: audit: type=1334 audit(1769256067.763:568): prog-id=171 op=UNLOAD Jan 24 12:01:07.869772 kernel: audit: type=1300 
audit(1769256067.763:568): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:07.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138633461356432356164313538616666353636326435643162663866 Jan 24 12:01:07.889895 kernel: audit: type=1327 audit(1769256067.763:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138633461356432356164313538616666353636326435643162663866 Jan 24 12:01:07.890024 kernel: audit: type=1334 audit(1769256067.763:569): prog-id=170 op=UNLOAD Jan 24 12:01:07.763000 audit: BPF prog-id=170 op=UNLOAD Jan 24 12:01:07.763000 audit[4231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:07.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138633461356432356164313538616666353636326435643162663866 Jan 24 12:01:07.763000 audit: BPF prog-id=172 op=LOAD Jan 24 12:01:07.763000 audit[4231]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3376 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:07.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138633461356432356164313538616666353636326435643162663866 Jan 24 12:01:08.034611 containerd[1652]: time="2026-01-24T12:01:08.034117934Z" level=info msg="StartContainer for \"18c4a5d25ad158aff5662d5d1bf8f7db97528c170541e0543331cdc6e360c26d\" returns successfully" Jan 24 12:01:08.402581 kubelet[2876]: E0124 12:01:08.391223 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:08.496234 kubelet[2876]: I0124 12:01:08.495860 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vhvcr" podStartSLOduration=2.688549523 podStartE2EDuration="43.495812274s" podCreationTimestamp="2026-01-24 12:00:25 +0000 UTC" firstStartedPulling="2026-01-24 12:00:26.153903508 +0000 UTC m=+32.996115668" lastFinishedPulling="2026-01-24 12:01:06.961166259 +0000 UTC m=+73.803378419" observedRunningTime="2026-01-24 12:01:08.493015551 +0000 UTC m=+75.335227731" watchObservedRunningTime="2026-01-24 12:01:08.495812274 +0000 UTC m=+75.338024454" Jan 24 12:01:08.561139 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
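The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. A quick recomputation with the timestamps printed in the log, truncated to microseconds:

    from datetime import datetime

    # Timestamps copied from the pod_startup_latency_tracker entry above (UTC).
    created    = datetime(2026, 1, 24, 12, 0, 25)            # podCreationTimestamp
    pull_start = datetime(2026, 1, 24, 12, 0, 26, 153904)    # firstStartedPulling
    pull_end   = datetime(2026, 1, 24, 12, 1, 6, 961166)     # lastFinishedPulling
    running    = datetime(2026, 1, 24, 12, 1, 8, 495812)     # observedRunningTime

    e2e = (running - created).total_seconds()             # 43.495812 -> podStartE2EDuration
    slo = e2e - (pull_end - pull_start).total_seconds()    # ~2.68855  -> podStartSLOduration
    print(f"E2E={e2e:.6f}s  SLO={slo:.6f}s")

The ~40.8 s pull window is larger than the 25.66 s reported for the calico/node image pull itself, which is consistent with time spent waiting on other pulls being included in the window.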
Jan 24 12:01:08.561296 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 24 12:01:08.896002 kubelet[2876]: E0124 12:01:08.895930 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:08.897179 containerd[1652]: time="2026-01-24T12:01:08.897122688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xzzf9,Uid:729c35dc-3b15-46a8-9075-ed539b490113,Namespace:kube-system,Attempt:0,}" Jan 24 12:01:09.171514 kubelet[2876]: I0124 12:01:09.171291 2876 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7zqt\" (UniqueName: \"kubernetes.io/projected/9e4f5042-9b9e-42a4-bad4-8066f7c50d50-kube-api-access-z7zqt\") pod \"9e4f5042-9b9e-42a4-bad4-8066f7c50d50\" (UID: \"9e4f5042-9b9e-42a4-bad4-8066f7c50d50\") " Jan 24 12:01:09.171514 kubelet[2876]: I0124 12:01:09.171389 2876 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e4f5042-9b9e-42a4-bad4-8066f7c50d50-whisker-ca-bundle\") pod \"9e4f5042-9b9e-42a4-bad4-8066f7c50d50\" (UID: \"9e4f5042-9b9e-42a4-bad4-8066f7c50d50\") " Jan 24 12:01:09.171514 kubelet[2876]: I0124 12:01:09.171425 2876 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9e4f5042-9b9e-42a4-bad4-8066f7c50d50-whisker-backend-key-pair\") pod \"9e4f5042-9b9e-42a4-bad4-8066f7c50d50\" (UID: \"9e4f5042-9b9e-42a4-bad4-8066f7c50d50\") " Jan 24 12:01:09.175602 kubelet[2876]: I0124 12:01:09.174281 2876 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e4f5042-9b9e-42a4-bad4-8066f7c50d50-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9e4f5042-9b9e-42a4-bad4-8066f7c50d50" (UID: "9e4f5042-9b9e-42a4-bad4-8066f7c50d50"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 12:01:09.199650 systemd[1]: var-lib-kubelet-pods-9e4f5042\x2d9b9e\x2d42a4\x2dbad4\x2d8066f7c50d50-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz7zqt.mount: Deactivated successfully. Jan 24 12:01:09.202456 kubelet[2876]: I0124 12:01:09.200525 2876 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e4f5042-9b9e-42a4-bad4-8066f7c50d50-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9e4f5042-9b9e-42a4-bad4-8066f7c50d50" (UID: "9e4f5042-9b9e-42a4-bad4-8066f7c50d50"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 12:01:09.238081 kubelet[2876]: I0124 12:01:09.237985 2876 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e4f5042-9b9e-42a4-bad4-8066f7c50d50-kube-api-access-z7zqt" (OuterVolumeSpecName: "kube-api-access-z7zqt") pod "9e4f5042-9b9e-42a4-bad4-8066f7c50d50" (UID: "9e4f5042-9b9e-42a4-bad4-8066f7c50d50"). InnerVolumeSpecName "kube-api-access-z7zqt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 12:01:09.238161 systemd[1]: var-lib-kubelet-pods-9e4f5042\x2d9b9e\x2d42a4\x2dbad4\x2d8066f7c50d50-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
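The recurring dns.go:154 "Nameserver limits exceeded" warnings are kubelet noting that the node's resolv.conf lists more nameservers than the classic three-server limit (glibc's MAXNS), so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 here) are applied. A rough sketch of that trimming; the parser and the fourth server in the example input are made up for illustration, only the three applied addresses come from the log:

    MAX_NAMESERVERS = 3  # resolv.conf / glibc MAXNS, the limit the kubelet warning refers to

    def applied_nameservers(resolv_conf_text: str) -> list[str]:
        """Return the nameservers that will actually be applied (first three)."""
        servers = [
            line.split()[1]
            for line in resolv_conf_text.splitlines()
            if line.strip().startswith("nameserver") and len(line.split()) > 1
        ]
        if len(servers) > MAX_NAMESERVERS:
            # This is the situation the log line reports: extra servers are dropped.
            print(f"warning: {len(servers) - MAX_NAMESERVERS} nameserver(s) omitted")
        return servers[:MAX_NAMESERVERS]

    print(applied_nameservers(
        "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    ))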
Jan 24 12:01:09.273203 kubelet[2876]: I0124 12:01:09.272936 2876 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z7zqt\" (UniqueName: \"kubernetes.io/projected/9e4f5042-9b9e-42a4-bad4-8066f7c50d50-kube-api-access-z7zqt\") on node \"localhost\" DevicePath \"\"" Jan 24 12:01:09.273203 kubelet[2876]: I0124 12:01:09.273005 2876 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e4f5042-9b9e-42a4-bad4-8066f7c50d50-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 24 12:01:09.273203 kubelet[2876]: I0124 12:01:09.273024 2876 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9e4f5042-9b9e-42a4-bad4-8066f7c50d50-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 24 12:01:09.401234 kubelet[2876]: E0124 12:01:09.400859 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:09.445837 systemd[1]: Removed slice kubepods-besteffort-pod9e4f5042_9b9e_42a4_bad4_8066f7c50d50.slice - libcontainer container kubepods-besteffort-pod9e4f5042_9b9e_42a4_bad4_8066f7c50d50.slice. Jan 24 12:01:09.990387 kubelet[2876]: I0124 12:01:09.990304 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ec18082a-49e1-4173-9b94-153e655a0861-whisker-backend-key-pair\") pod \"whisker-869c797fbb-hltgx\" (UID: \"ec18082a-49e1-4173-9b94-153e655a0861\") " pod="calico-system/whisker-869c797fbb-hltgx" Jan 24 12:01:09.991232 kubelet[2876]: I0124 12:01:09.990845 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgb5q\" (UniqueName: \"kubernetes.io/projected/ec18082a-49e1-4173-9b94-153e655a0861-kube-api-access-dgb5q\") pod \"whisker-869c797fbb-hltgx\" (UID: \"ec18082a-49e1-4173-9b94-153e655a0861\") " pod="calico-system/whisker-869c797fbb-hltgx" Jan 24 12:01:09.991232 kubelet[2876]: I0124 12:01:09.990909 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec18082a-49e1-4173-9b94-153e655a0861-whisker-ca-bundle\") pod \"whisker-869c797fbb-hltgx\" (UID: \"ec18082a-49e1-4173-9b94-153e655a0861\") " pod="calico-system/whisker-869c797fbb-hltgx" Jan 24 12:01:09.995001 containerd[1652]: time="2026-01-24T12:01:09.994489801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-k2xcd,Uid:f944553c-3de6-4dea-af30-6e177d6839ad,Namespace:calico-system,Attempt:0,}" Jan 24 12:01:09.999945 kubelet[2876]: I0124 12:01:09.999914 2876 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e4f5042-9b9e-42a4-bad4-8066f7c50d50" path="/var/lib/kubelet/pods/9e4f5042-9b9e-42a4-bad4-8066f7c50d50/volumes" Jan 24 12:01:10.003493 systemd[1]: Created slice kubepods-besteffort-podec18082a_49e1_4173_9b94_153e655a0861.slice - libcontainer container kubepods-besteffort-podec18082a_49e1_4173_9b94_153e655a0861.slice. 
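The unit names in the surrounding mount and slice entries are systemd-escaped paths: "/" separators become "-", while characters outside ASCII alphanumerics, "_" and "." (including "-" itself and "~") are written as \xXX escapes, which is why the kubelet volume paths appear as var-lib-kubelet-pods-…\x2d…\x7eprojected-….mount. The kubepods-…-pod….slice names additionally follow kubelet's own cgroup convention of replacing "-" in the pod UID with "_". A rough approximation of the path escaping, not a full reimplementation of systemd-escape:

    def systemd_escape_path(path: str) -> str:
        """Approximate systemd's path escaping as seen in the .mount unit names above."""
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")                      # path separators become dashes
            elif (ch.isalnum() and ch.isascii()) or ch in "_.":
                out.append(ch)                       # kept verbatim
            else:
                out.append(f"\\x{ord(ch):02x}")      # everything else, incl. '-' and '~'
        return "".join(out)

    uid = "9e4f5042-9b9e-42a4-bad4-8066f7c50d50"
    print(systemd_escape_path(
        f"/var/lib/kubelet/pods/{uid}/volumes/kubernetes.io~projected/kube-api-access-z7zqt"
    ) + ".mount")

Running it on the kube-api-access volume path reproduces the mount unit name deactivated a few entries earlier.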
Jan 24 12:01:10.308144 systemd-networkd[1510]: cali6fe78416101: Link UP Jan 24 12:01:10.309047 systemd-networkd[1510]: cali6fe78416101: Gained carrier Jan 24 12:01:10.350676 containerd[1652]: time="2026-01-24T12:01:10.348352579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-869c797fbb-hltgx,Uid:ec18082a-49e1-4173-9b94-153e655a0861,Namespace:calico-system,Attempt:0,}" Jan 24 12:01:10.399236 containerd[1652]: 2026-01-24 12:01:08.981 [INFO][4300] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 12:01:10.399236 containerd[1652]: 2026-01-24 12:01:09.147 [INFO][4300] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--xzzf9-eth0 coredns-66bc5c9577- kube-system 729c35dc-3b15-46a8-9075-ed539b490113 947 0 2026-01-24 11:59:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-xzzf9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6fe78416101 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzf9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xzzf9-" Jan 24 12:01:10.399236 containerd[1652]: 2026-01-24 12:01:09.147 [INFO][4300] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzf9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xzzf9-eth0" Jan 24 12:01:10.399236 containerd[1652]: 2026-01-24 12:01:09.985 [INFO][4322] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" HandleID="k8s-pod-network.435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" Workload="localhost-k8s-coredns--66bc5c9577--xzzf9-eth0" Jan 24 12:01:10.399484 containerd[1652]: 2026-01-24 12:01:09.994 [INFO][4322] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" HandleID="k8s-pod-network.435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" Workload="localhost-k8s-coredns--66bc5c9577--xzzf9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a500), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-xzzf9", "timestamp":"2026-01-24 12:01:09.985947619 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:01:10.399484 containerd[1652]: 2026-01-24 12:01:10.000 [INFO][4322] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:01:10.399484 containerd[1652]: 2026-01-24 12:01:10.006 [INFO][4322] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 12:01:10.399484 containerd[1652]: 2026-01-24 12:01:10.006 [INFO][4322] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:01:10.399484 containerd[1652]: 2026-01-24 12:01:10.078 [INFO][4322] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" host="localhost" Jan 24 12:01:10.399484 containerd[1652]: 2026-01-24 12:01:10.101 [INFO][4322] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:01:10.399484 containerd[1652]: 2026-01-24 12:01:10.147 [INFO][4322] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:01:10.399484 containerd[1652]: 2026-01-24 12:01:10.152 [INFO][4322] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:10.399484 containerd[1652]: 2026-01-24 12:01:10.172 [INFO][4322] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:10.399484 containerd[1652]: 2026-01-24 12:01:10.173 [INFO][4322] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" host="localhost" Jan 24 12:01:10.399944 containerd[1652]: 2026-01-24 12:01:10.182 [INFO][4322] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d Jan 24 12:01:10.399944 containerd[1652]: 2026-01-24 12:01:10.238 [INFO][4322] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" host="localhost" Jan 24 12:01:10.399944 containerd[1652]: 2026-01-24 12:01:10.270 [INFO][4322] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" host="localhost" Jan 24 12:01:10.399944 containerd[1652]: 2026-01-24 12:01:10.270 [INFO][4322] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" host="localhost" Jan 24 12:01:10.399944 containerd[1652]: 2026-01-24 12:01:10.270 [INFO][4322] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 12:01:10.399944 containerd[1652]: 2026-01-24 12:01:10.270 [INFO][4322] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" HandleID="k8s-pod-network.435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" Workload="localhost-k8s-coredns--66bc5c9577--xzzf9-eth0" Jan 24 12:01:10.400124 containerd[1652]: 2026-01-24 12:01:10.280 [INFO][4300] cni-plugin/k8s.go 418: Populated endpoint ContainerID="435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzf9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xzzf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xzzf9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"729c35dc-3b15-46a8-9075-ed539b490113", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 11, 59, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-xzzf9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fe78416101", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:10.400124 containerd[1652]: 2026-01-24 12:01:10.280 [INFO][4300] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzf9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xzzf9-eth0" Jan 24 12:01:10.400124 containerd[1652]: 2026-01-24 12:01:10.280 [INFO][4300] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fe78416101 ContainerID="435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzf9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xzzf9-eth0" Jan 24 12:01:10.400124 containerd[1652]: 2026-01-24 12:01:10.314 
[INFO][4300] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzf9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xzzf9-eth0" Jan 24 12:01:10.400124 containerd[1652]: 2026-01-24 12:01:10.318 [INFO][4300] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzf9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xzzf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xzzf9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"729c35dc-3b15-46a8-9075-ed539b490113", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 11, 59, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d", Pod:"coredns-66bc5c9577-xzzf9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fe78416101", MAC:"5a:13:69:f4:e8:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:10.400124 containerd[1652]: 2026-01-24 12:01:10.373 [INFO][4300] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzf9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xzzf9-eth0" Jan 24 12:01:10.575969 systemd-networkd[1510]: calibeebf379367: Link UP Jan 24 12:01:10.577160 systemd-networkd[1510]: calibeebf379367: Gained carrier Jan 24 12:01:10.589437 containerd[1652]: time="2026-01-24T12:01:10.588226570Z" level=info msg="connecting to shim 435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d" address="unix:///run/containerd/s/d43e548ba0a6bdb2fb6586f3406a4dadfb1aec34bfd8dba564db49a578b88203" 
namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.134 [INFO][4355] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.181 [INFO][4355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--k2xcd-eth0 goldmane-7c778bb748- calico-system f944553c-3de6-4dea-af30-6e177d6839ad 952 0 2026-01-24 12:00:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-k2xcd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibeebf379367 [] [] }} ContainerID="5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" Namespace="calico-system" Pod="goldmane-7c778bb748-k2xcd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k2xcd-" Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.181 [INFO][4355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" Namespace="calico-system" Pod="goldmane-7c778bb748-k2xcd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k2xcd-eth0" Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.307 [INFO][4384] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" HandleID="k8s-pod-network.5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" Workload="localhost-k8s-goldmane--7c778bb748--k2xcd-eth0" Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.312 [INFO][4384] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" HandleID="k8s-pod-network.5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" Workload="localhost-k8s-goldmane--7c778bb748--k2xcd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035fba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-k2xcd", "timestamp":"2026-01-24 12:01:10.307489775 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.312 [INFO][4384] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.313 [INFO][4384] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
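In the coredns WorkloadEndpoint dump above, the declared container ports are printed as Go hex literals. Decoding the values taken from that dump recovers the usual CoreDNS ports:

    # Port values copied from the WorkloadEndpoint dump above (Go hex literals)
    ports = {
        "dns (UDP)": 0x35,
        "dns-tcp": 0x35,
        "metrics": 0x23C1,
        "liveness-probe": 0x1F90,
        "readiness-probe": 0x1FF5,
    }
    for name, value in ports.items():
        print(f"{name}: {value}")
    # dns (UDP): 53, dns-tcp: 53, metrics: 9153, liveness-probe: 8080, readiness-probe: 8181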
Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.313 [INFO][4384] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.370 [INFO][4384] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" host="localhost" Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.401 [INFO][4384] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.457 [INFO][4384] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.467 [INFO][4384] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.483 [INFO][4384] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.484 [INFO][4384] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" host="localhost" Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.489 [INFO][4384] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520 Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.528 [INFO][4384] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" host="localhost" Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.553 [INFO][4384] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" host="localhost" Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.553 [INFO][4384] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" host="localhost" Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.553 [INFO][4384] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
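Both IPAM sequences above follow the same shape: acquire the host-wide IPAM lock, confirm the host's affinity to block 192.168.88.128/26, then claim the next free address (192.168.88.129 for coredns-66bc5c9577-xzzf9, 192.168.88.130 for goldmane-7c778bb748-k2xcd). A deliberately tiny, illustrative allocator that mimics only that last step; real Calico IPAM also persists handles and block affinities in the datastore:

    import ipaddress
    import threading

    class TinyBlockAllocator:
        """Toy allocator: hands out the next free host address from one CIDR block."""

        def __init__(self, cidr: str):
            self.block = ipaddress.ip_network(cidr)
            self.allocated: set[ipaddress.IPv4Address] = set()
            self.lock = threading.Lock()     # stand-in for Calico's host-wide IPAM lock

        def auto_assign(self) -> ipaddress.IPv4Address:
            with self.lock:
                for ip in self.block.hosts():        # .hosts() skips .128 and .191 for a /26
                    if ip not in self.allocated:
                        self.allocated.add(ip)
                        return ip
                raise RuntimeError(f"block {self.block} exhausted")

    alloc = TinyBlockAllocator("192.168.88.128/26")
    print(alloc.auto_assign(), alloc.auto_assign())  # 192.168.88.129 192.168.88.130, as assigned above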
Jan 24 12:01:10.657257 containerd[1652]: 2026-01-24 12:01:10.553 [INFO][4384] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" HandleID="k8s-pod-network.5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" Workload="localhost-k8s-goldmane--7c778bb748--k2xcd-eth0" Jan 24 12:01:10.658525 containerd[1652]: 2026-01-24 12:01:10.568 [INFO][4355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" Namespace="calico-system" Pod="goldmane-7c778bb748-k2xcd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k2xcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--k2xcd-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"f944553c-3de6-4dea-af30-6e177d6839ad", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 0, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-k2xcd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibeebf379367", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:10.658525 containerd[1652]: 2026-01-24 12:01:10.569 [INFO][4355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" Namespace="calico-system" Pod="goldmane-7c778bb748-k2xcd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k2xcd-eth0" Jan 24 12:01:10.658525 containerd[1652]: 2026-01-24 12:01:10.569 [INFO][4355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibeebf379367 ContainerID="5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" Namespace="calico-system" Pod="goldmane-7c778bb748-k2xcd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k2xcd-eth0" Jan 24 12:01:10.658525 containerd[1652]: 2026-01-24 12:01:10.574 [INFO][4355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" Namespace="calico-system" Pod="goldmane-7c778bb748-k2xcd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k2xcd-eth0" Jan 24 12:01:10.658525 containerd[1652]: 2026-01-24 12:01:10.580 [INFO][4355] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" Namespace="calico-system" Pod="goldmane-7c778bb748-k2xcd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k2xcd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--k2xcd-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"f944553c-3de6-4dea-af30-6e177d6839ad", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 0, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520", Pod:"goldmane-7c778bb748-k2xcd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibeebf379367", MAC:"82:25:49:fa:e8:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:10.658525 containerd[1652]: 2026-01-24 12:01:10.649 [INFO][4355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" Namespace="calico-system" Pod="goldmane-7c778bb748-k2xcd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k2xcd-eth0" Jan 24 12:01:10.684974 systemd[1]: Started cri-containerd-435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d.scope - libcontainer container 435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d. 
Jan 24 12:01:10.769386 containerd[1652]: time="2026-01-24T12:01:10.769287627Z" level=info msg="connecting to shim 5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520" address="unix:///run/containerd/s/c5c993b22d1f8e2a12970bad852916ead6b74ea23ee3961b67a0169150b26db0" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:01:10.770000 audit: BPF prog-id=173 op=LOAD Jan 24 12:01:10.772000 audit: BPF prog-id=174 op=LOAD Jan 24 12:01:10.772000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4424 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:10.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433356539613130326566636330623063666133646534376133383334 Jan 24 12:01:10.772000 audit: BPF prog-id=174 op=UNLOAD Jan 24 12:01:10.772000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4424 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:10.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433356539613130326566636330623063666133646534376133383334 Jan 24 12:01:10.773000 audit: BPF prog-id=175 op=LOAD Jan 24 12:01:10.773000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4424 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:10.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433356539613130326566636330623063666133646534376133383334 Jan 24 12:01:10.776000 audit: BPF prog-id=176 op=LOAD Jan 24 12:01:10.776000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4424 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:10.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433356539613130326566636330623063666133646534376133383334 Jan 24 12:01:10.776000 audit: BPF prog-id=176 op=UNLOAD Jan 24 12:01:10.776000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4424 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:10.776000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433356539613130326566636330623063666133646534376133383334 Jan 24 12:01:10.776000 audit: BPF prog-id=175 op=UNLOAD Jan 24 12:01:10.776000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4424 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:10.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433356539613130326566636330623063666133646534376133383334 Jan 24 12:01:10.776000 audit: BPF prog-id=177 op=LOAD Jan 24 12:01:10.776000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4424 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:10.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433356539613130326566636330623063666133646534376133383334 Jan 24 12:01:10.779971 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:01:10.849924 systemd[1]: Started cri-containerd-5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520.scope - libcontainer container 5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520. 
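The audit PROCTITLE payloads in these records are hex-encoded because the process title contains NUL-separated argv entries. Decoding the leading portion of the value shown above yields the runc invocation for the container shim; the remainder of the payload, cut off in the log itself, is the --log path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/ ending in the container ID:

    def decode_proctitle(hex_title: str) -> str:
        """Decode an audit PROCTITLE hex payload into a readable command line."""
        raw = bytes.fromhex(hex_title)
        # argv entries are separated by NUL bytes in the kernel's proctitle record
        return " ".join(part.decode(errors="replace") for part in raw.split(b"\x00") if part)

    print(decode_proctitle(
        "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
        "002D2D6C6F67"
    ))
    # -> runc --root /run/containerd/runc/k8s.io --log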
Jan 24 12:01:10.882283 containerd[1652]: time="2026-01-24T12:01:10.881991501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xzzf9,Uid:729c35dc-3b15-46a8-9075-ed539b490113,Namespace:kube-system,Attempt:0,} returns sandbox id \"435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d\"" Jan 24 12:01:10.890034 kubelet[2876]: E0124 12:01:10.889968 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:10.913904 containerd[1652]: time="2026-01-24T12:01:10.906728852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76997bfb4b-ggwxc,Uid:543ea964-5bd2-4a2a-be7e-5b64397ea1f6,Namespace:calico-apiserver,Attempt:0,}" Jan 24 12:01:10.929849 containerd[1652]: time="2026-01-24T12:01:10.929798736Z" level=info msg="CreateContainer within sandbox \"435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 12:01:10.930000 audit: BPF prog-id=178 op=LOAD Jan 24 12:01:10.932000 audit: BPF prog-id=179 op=LOAD Jan 24 12:01:10.932000 audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4475 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:10.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562333933373139363839663537646434313034396664373932353864 Jan 24 12:01:10.932000 audit: BPF prog-id=179 op=UNLOAD Jan 24 12:01:10.932000 audit[4487]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4475 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:10.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562333933373139363839663537646434313034396664373932353864 Jan 24 12:01:10.933000 audit: BPF prog-id=180 op=LOAD Jan 24 12:01:10.933000 audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4475 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:10.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562333933373139363839663537646434313034396664373932353864 Jan 24 12:01:10.933000 audit: BPF prog-id=181 op=LOAD Jan 24 12:01:10.933000 audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4475 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
12:01:10.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562333933373139363839663537646434313034396664373932353864 Jan 24 12:01:10.933000 audit: BPF prog-id=181 op=UNLOAD Jan 24 12:01:10.933000 audit[4487]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4475 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:10.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562333933373139363839663537646434313034396664373932353864 Jan 24 12:01:10.933000 audit: BPF prog-id=180 op=UNLOAD Jan 24 12:01:10.933000 audit[4487]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4475 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:10.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562333933373139363839663537646434313034396664373932353864 Jan 24 12:01:10.933000 audit: BPF prog-id=182 op=LOAD Jan 24 12:01:10.933000 audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4475 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:10.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562333933373139363839663537646434313034396664373932353864 Jan 24 12:01:10.944781 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:01:11.005502 systemd-networkd[1510]: calic46019496b0: Link UP Jan 24 12:01:11.006840 containerd[1652]: time="2026-01-24T12:01:11.006165944Z" level=info msg="Container 38e0c023fcf15b25f86c6b1d00dfad7ed705c5d8f1b3156cdc9c4f0080271db5: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:01:11.017150 systemd-networkd[1510]: calic46019496b0: Gained carrier Jan 24 12:01:11.088601 containerd[1652]: time="2026-01-24T12:01:11.086675509Z" level=info msg="CreateContainer within sandbox \"435e9a102efcc0b0cfa3de47a38341a808693d2606950a8915ca37425ea83b8d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38e0c023fcf15b25f86c6b1d00dfad7ed705c5d8f1b3156cdc9c4f0080271db5\"" Jan 24 12:01:11.091057 containerd[1652]: time="2026-01-24T12:01:11.090475689Z" level=info msg="StartContainer for \"38e0c023fcf15b25f86c6b1d00dfad7ed705c5d8f1b3156cdc9c4f0080271db5\"" Jan 24 12:01:11.129103 containerd[1652]: time="2026-01-24T12:01:11.128987362Z" level=info msg="connecting to shim 38e0c023fcf15b25f86c6b1d00dfad7ed705c5d8f1b3156cdc9c4f0080271db5" 
address="unix:///run/containerd/s/d43e548ba0a6bdb2fb6586f3406a4dadfb1aec34bfd8dba564db49a578b88203" protocol=ttrpc version=3 Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.490 [INFO][4401] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.588 [INFO][4401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--869c797fbb--hltgx-eth0 whisker-869c797fbb- calico-system ec18082a-49e1-4173-9b94-153e655a0861 1082 0 2026-01-24 12:01:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:869c797fbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-869c797fbb-hltgx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic46019496b0 [] [] }} ContainerID="2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" Namespace="calico-system" Pod="whisker-869c797fbb-hltgx" WorkloadEndpoint="localhost-k8s-whisker--869c797fbb--hltgx-" Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.589 [INFO][4401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" Namespace="calico-system" Pod="whisker-869c797fbb-hltgx" WorkloadEndpoint="localhost-k8s-whisker--869c797fbb--hltgx-eth0" Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.710 [INFO][4438] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" HandleID="k8s-pod-network.2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" Workload="localhost-k8s-whisker--869c797fbb--hltgx-eth0" Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.711 [INFO][4438] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" HandleID="k8s-pod-network.2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" Workload="localhost-k8s-whisker--869c797fbb--hltgx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-869c797fbb-hltgx", "timestamp":"2026-01-24 12:01:10.710962561 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.711 [INFO][4438] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.711 [INFO][4438] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.712 [INFO][4438] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.737 [INFO][4438] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" host="localhost" Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.763 [INFO][4438] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.793 [INFO][4438] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.811 [INFO][4438] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.835 [INFO][4438] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.837 [INFO][4438] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" host="localhost" Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.856 [INFO][4438] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.885 [INFO][4438] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" host="localhost" Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.955 [INFO][4438] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" host="localhost" Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.955 [INFO][4438] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" host="localhost" Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.955 [INFO][4438] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 12:01:11.163610 containerd[1652]: 2026-01-24 12:01:10.956 [INFO][4438] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" HandleID="k8s-pod-network.2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" Workload="localhost-k8s-whisker--869c797fbb--hltgx-eth0" Jan 24 12:01:11.164660 containerd[1652]: 2026-01-24 12:01:10.983 [INFO][4401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" Namespace="calico-system" Pod="whisker-869c797fbb-hltgx" WorkloadEndpoint="localhost-k8s-whisker--869c797fbb--hltgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--869c797fbb--hltgx-eth0", GenerateName:"whisker-869c797fbb-", Namespace:"calico-system", SelfLink:"", UID:"ec18082a-49e1-4173-9b94-153e655a0861", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 1, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"869c797fbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-869c797fbb-hltgx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic46019496b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:11.164660 containerd[1652]: 2026-01-24 12:01:10.983 [INFO][4401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" Namespace="calico-system" Pod="whisker-869c797fbb-hltgx" WorkloadEndpoint="localhost-k8s-whisker--869c797fbb--hltgx-eth0" Jan 24 12:01:11.164660 containerd[1652]: 2026-01-24 12:01:10.983 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic46019496b0 ContainerID="2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" Namespace="calico-system" Pod="whisker-869c797fbb-hltgx" WorkloadEndpoint="localhost-k8s-whisker--869c797fbb--hltgx-eth0" Jan 24 12:01:11.164660 containerd[1652]: 2026-01-24 12:01:11.030 [INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" Namespace="calico-system" Pod="whisker-869c797fbb-hltgx" WorkloadEndpoint="localhost-k8s-whisker--869c797fbb--hltgx-eth0" Jan 24 12:01:11.164660 containerd[1652]: 2026-01-24 12:01:11.039 [INFO][4401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" Namespace="calico-system" Pod="whisker-869c797fbb-hltgx" WorkloadEndpoint="localhost-k8s-whisker--869c797fbb--hltgx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--869c797fbb--hltgx-eth0", GenerateName:"whisker-869c797fbb-", Namespace:"calico-system", SelfLink:"", UID:"ec18082a-49e1-4173-9b94-153e655a0861", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 1, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"869c797fbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb", Pod:"whisker-869c797fbb-hltgx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic46019496b0", MAC:"7e:f4:51:6b:90:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:11.164660 containerd[1652]: 2026-01-24 12:01:11.101 [INFO][4401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" Namespace="calico-system" Pod="whisker-869c797fbb-hltgx" WorkloadEndpoint="localhost-k8s-whisker--869c797fbb--hltgx-eth0" Jan 24 12:01:11.198985 systemd[1]: Started cri-containerd-38e0c023fcf15b25f86c6b1d00dfad7ed705c5d8f1b3156cdc9c4f0080271db5.scope - libcontainer container 38e0c023fcf15b25f86c6b1d00dfad7ed705c5d8f1b3156cdc9c4f0080271db5. 
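The k8s.go 446 entry above records the MAC (7e:f4:51:6b:90:2a) and the host-side interface name being added to the endpoint before it is written back to the datastore. That MAC follows the usual locally administered, unicast pattern; the sketch below generates an address of the same shape. It is illustrative only and not a claim about Calico's exact MAC-selection logic.

```python
# Illustrative only: build a random locally administered, unicast MAC of the
# same form as the 7e:f4:51:6b:90:2a value written into the endpoint above.
import random

def random_local_unicast_mac() -> str:
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE  # set "locally administered", clear "multicast"
    return ":".join(f"{o:02x}" for o in octets)

print(random_local_unicast_mac())
```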
Jan 24 12:01:11.366000 audit: BPF prog-id=183 op=LOAD Jan 24 12:01:11.372000 audit: BPF prog-id=184 op=LOAD Jan 24 12:01:11.372000 audit[4532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4424 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:11.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338653063303233666366313562323566383663366231643030646661 Jan 24 12:01:11.372000 audit: BPF prog-id=184 op=UNLOAD Jan 24 12:01:11.372000 audit[4532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4424 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:11.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338653063303233666366313562323566383663366231643030646661 Jan 24 12:01:11.373990 containerd[1652]: time="2026-01-24T12:01:11.373756473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-k2xcd,Uid:f944553c-3de6-4dea-af30-6e177d6839ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b393719689f57dd41049fd79258df3d219a63bfdb46b51b9a140646f34bd520\"" Jan 24 12:01:11.373000 audit: BPF prog-id=185 op=LOAD Jan 24 12:01:11.373000 audit[4532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4424 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:11.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338653063303233666366313562323566383663366231643030646661 Jan 24 12:01:11.373000 audit: BPF prog-id=186 op=LOAD Jan 24 12:01:11.373000 audit[4532]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4424 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:11.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338653063303233666366313562323566383663366231643030646661 Jan 24 12:01:11.373000 audit: BPF prog-id=186 op=UNLOAD Jan 24 12:01:11.373000 audit[4532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4424 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:11.373000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338653063303233666366313562323566383663366231643030646661 Jan 24 12:01:11.373000 audit: BPF prog-id=185 op=UNLOAD Jan 24 12:01:11.373000 audit[4532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4424 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:11.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338653063303233666366313562323566383663366231643030646661 Jan 24 12:01:11.373000 audit: BPF prog-id=187 op=LOAD Jan 24 12:01:11.373000 audit[4532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4424 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:11.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338653063303233666366313562323566383663366231643030646661 Jan 24 12:01:11.396490 containerd[1652]: time="2026-01-24T12:01:11.396313917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 12:01:11.456627 containerd[1652]: time="2026-01-24T12:01:11.454745788Z" level=info msg="connecting to shim 2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb" address="unix:///run/containerd/s/be399e8a35170c690b18cf7d9af75c511e0ee09b896163fad745ec80e7f26910" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:01:11.479220 containerd[1652]: time="2026-01-24T12:01:11.479119829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:11.491353 containerd[1652]: time="2026-01-24T12:01:11.491234521Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 12:01:11.491721 containerd[1652]: time="2026-01-24T12:01:11.491394424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:11.493118 kubelet[2876]: E0124 12:01:11.492068 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 12:01:11.493118 kubelet[2876]: E0124 12:01:11.492297 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 12:01:11.496515 kubelet[2876]: E0124 
12:01:11.496131 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-k2xcd_calico-system(f944553c-3de6-4dea-af30-6e177d6839ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:11.496515 kubelet[2876]: E0124 12:01:11.496194 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:01:11.646922 systemd[1]: Started cri-containerd-2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb.scope - libcontainer container 2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb. Jan 24 12:01:11.696618 containerd[1652]: time="2026-01-24T12:01:11.685170489Z" level=info msg="StartContainer for \"38e0c023fcf15b25f86c6b1d00dfad7ed705c5d8f1b3156cdc9c4f0080271db5\" returns successfully" Jan 24 12:01:11.808118 systemd-networkd[1510]: cali45840fec461: Link UP Jan 24 12:01:11.829272 systemd-networkd[1510]: cali45840fec461: Gained carrier Jan 24 12:01:11.844000 audit: BPF prog-id=188 op=LOAD Jan 24 12:01:11.852000 audit: BPF prog-id=189 op=LOAD Jan 24 12:01:11.852000 audit[4641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00016e238 a2=98 a3=0 items=0 ppid=4619 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:11.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313363363461303561333538663234316432623034373562373639 Jan 24 12:01:11.852000 audit: BPF prog-id=189 op=UNLOAD Jan 24 12:01:11.852000 audit[4641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4619 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:11.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313363363461303561333538663234316432623034373562373639 Jan 24 12:01:11.853000 audit: BPF prog-id=190 op=LOAD Jan 24 12:01:11.853000 audit[4641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00016e488 a2=98 a3=0 items=0 ppid=4619 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:11.853000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313363363461303561333538663234316432623034373562373639 Jan 24 12:01:11.853000 audit: BPF prog-id=191 op=LOAD Jan 24 12:01:11.853000 audit[4641]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00016e218 a2=98 a3=0 items=0 ppid=4619 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:11.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313363363461303561333538663234316432623034373562373639 Jan 24 12:01:11.853000 audit: BPF prog-id=191 op=UNLOAD Jan 24 12:01:11.853000 audit[4641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4619 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:11.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313363363461303561333538663234316432623034373562373639 Jan 24 12:01:11.853000 audit: BPF prog-id=190 op=UNLOAD Jan 24 12:01:11.853000 audit[4641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4619 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:11.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313363363461303561333538663234316432623034373562373639 Jan 24 12:01:11.853000 audit: BPF prog-id=192 op=LOAD Jan 24 12:01:11.853000 audit[4641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00016e6e8 a2=98 a3=0 items=0 ppid=4619 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:11.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313363363461303561333538663234316432623034373562373639 Jan 24 12:01:11.857652 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:01:11.861104 systemd-networkd[1510]: calibeebf379367: Gained IPv6LL Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.040 [INFO][4515] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.141 [INFO][4515] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--76997bfb4b--ggwxc-eth0 calico-apiserver-76997bfb4b- calico-apiserver 543ea964-5bd2-4a2a-be7e-5b64397ea1f6 949 0 2026-01-24 12:00:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76997bfb4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76997bfb4b-ggwxc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali45840fec461 [] [] }} ContainerID="008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-ggwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--ggwxc-" Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.142 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-ggwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--ggwxc-eth0" Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.361 [INFO][4551] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" HandleID="k8s-pod-network.008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" Workload="localhost-k8s-calico--apiserver--76997bfb4b--ggwxc-eth0" Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.361 [INFO][4551] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" HandleID="k8s-pod-network.008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" Workload="localhost-k8s-calico--apiserver--76997bfb4b--ggwxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76997bfb4b-ggwxc", "timestamp":"2026-01-24 12:01:11.361299461 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.362 [INFO][4551] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.362 [INFO][4551] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
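The audit PROCTITLE fields in the records above are hex-encoded process titles with NUL-separated argv elements (here, the runc invocations for the containers being started). A small decoder, assuming the plain hex encoding auditd uses for process titles containing non-printable bytes:

```python
# Decode an auditd PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hex_str: str) -> str:
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

print(decode_proctitle(
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
))
# -> "runc --root /run/containerd/runc/k8s.io"
```

Decoding the full values logged above yields runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/ followed by a truncated container ID, matching the sandboxes being started, while the later bpftool entries decode to map-management commands such as bpftool map list --json.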
Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.362 [INFO][4551] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.460 [INFO][4551] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" host="localhost" Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.506 [INFO][4551] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.569 [INFO][4551] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.591 [INFO][4551] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.614 [INFO][4551] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.614 [INFO][4551] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" host="localhost" Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.636 [INFO][4551] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.660 [INFO][4551] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" host="localhost" Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.705 [INFO][4551] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" host="localhost" Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.712 [INFO][4551] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" host="localhost" Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.713 [INFO][4551] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
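The PullImage failure for ghcr.io/flatcar/calico/goldmane:v3.30.4 above (and the similar failures that follow) resolves to a 404 from ghcr.io. One way to reproduce the check from outside the node is to query the registry's manifest endpoint directly. The sketch below assumes GHCR's standard Docker Registry v2 anonymous-pull token flow at https://ghcr.io/token; the repository and tag are taken from the failed pull.

```python
# Reproduce the registry-side "not found" seen in the PullImage errors.
# Assumes GHCR's anonymous Docker Registry v2 token endpoint.
import json
import urllib.error
import urllib.request

def manifest_status(repo: str, tag: str) -> int:
    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # 404 here corresponds to the "not found" pull errors in the log

print(manifest_status("flatcar/calico/goldmane", "v3.30.4"))
```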
Jan 24 12:01:11.896755 containerd[1652]: 2026-01-24 12:01:11.713 [INFO][4551] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" HandleID="k8s-pod-network.008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" Workload="localhost-k8s-calico--apiserver--76997bfb4b--ggwxc-eth0" Jan 24 12:01:11.902038 containerd[1652]: 2026-01-24 12:01:11.735 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-ggwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--ggwxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76997bfb4b--ggwxc-eth0", GenerateName:"calico-apiserver-76997bfb4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"543ea964-5bd2-4a2a-be7e-5b64397ea1f6", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76997bfb4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76997bfb4b-ggwxc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45840fec461", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:11.902038 containerd[1652]: 2026-01-24 12:01:11.736 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-ggwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--ggwxc-eth0" Jan 24 12:01:11.902038 containerd[1652]: 2026-01-24 12:01:11.736 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45840fec461 ContainerID="008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-ggwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--ggwxc-eth0" Jan 24 12:01:11.902038 containerd[1652]: 2026-01-24 12:01:11.829 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-ggwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--ggwxc-eth0" Jan 24 12:01:11.902038 containerd[1652]: 2026-01-24 12:01:11.829 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-ggwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--ggwxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76997bfb4b--ggwxc-eth0", GenerateName:"calico-apiserver-76997bfb4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"543ea964-5bd2-4a2a-be7e-5b64397ea1f6", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76997bfb4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e", Pod:"calico-apiserver-76997bfb4b-ggwxc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45840fec461", MAC:"36:fd:d1:e6:09:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:11.902038 containerd[1652]: 2026-01-24 12:01:11.859 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-ggwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--ggwxc-eth0" Jan 24 12:01:11.929513 containerd[1652]: time="2026-01-24T12:01:11.928358533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76997bfb4b-55f79,Uid:7e54efbd-6a62-4db3-8b3c-99aa330f72d1,Namespace:calico-apiserver,Attempt:0,}" Jan 24 12:01:11.937209 containerd[1652]: time="2026-01-24T12:01:11.934685015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9dddf448-n9r2d,Uid:dd704f6f-5a9f-42a8-93d9-5d24176bfd82,Namespace:calico-system,Attempt:0,}" Jan 24 12:01:12.169199 containerd[1652]: time="2026-01-24T12:01:12.168385316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-869c797fbb-hltgx,Uid:ec18082a-49e1-4173-9b94-153e655a0861,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a13c64a05a358f241d2b0475b7698387c73a464cad7c9e2fcb2ed5fc985effb\"" Jan 24 12:01:12.180067 systemd-networkd[1510]: cali6fe78416101: Gained IPv6LL Jan 24 12:01:12.247826 containerd[1652]: time="2026-01-24T12:01:12.247736400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 12:01:12.327516 containerd[1652]: time="2026-01-24T12:01:12.327419654Z" level=info msg="connecting to shim 008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e" address="unix:///run/containerd/s/4a40cf627eb39a287fdd6d0a0569923b63b81b48a046535d125ed0f1e566b74a" namespace=k8s.io protocol=ttrpc version=3 Jan 24 
12:01:12.364675 containerd[1652]: time="2026-01-24T12:01:12.364597112Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:12.378378 systemd-networkd[1510]: calic46019496b0: Gained IPv6LL Jan 24 12:01:12.382365 containerd[1652]: time="2026-01-24T12:01:12.382303676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 12:01:12.382890 containerd[1652]: time="2026-01-24T12:01:12.382609968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:12.390335 kubelet[2876]: E0124 12:01:12.385527 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 12:01:12.390335 kubelet[2876]: E0124 12:01:12.388898 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 12:01:12.390335 kubelet[2876]: E0124 12:01:12.389304 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-869c797fbb-hltgx_calico-system(ec18082a-49e1-4173-9b94-153e655a0861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:12.395603 containerd[1652]: time="2026-01-24T12:01:12.394985530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 12:01:12.500518 kubelet[2876]: E0124 12:01:12.499032 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:12.540875 containerd[1652]: time="2026-01-24T12:01:12.540105879Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:12.573866 kubelet[2876]: E0124 12:01:12.573728 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:01:12.584472 containerd[1652]: time="2026-01-24T12:01:12.583957870Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 12:01:12.584472 containerd[1652]: time="2026-01-24T12:01:12.584087005Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:12.589528 kubelet[2876]: E0124 12:01:12.589299 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 12:01:12.589528 kubelet[2876]: E0124 12:01:12.589483 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 12:01:12.589750 kubelet[2876]: E0124 12:01:12.589694 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-869c797fbb-hltgx_calico-system(ec18082a-49e1-4173-9b94-153e655a0861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:12.589839 kubelet[2876]: E0124 12:01:12.589764 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-869c797fbb-hltgx" podUID="ec18082a-49e1-4173-9b94-153e655a0861" Jan 24 12:01:12.660156 systemd[1]: Started cri-containerd-008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e.scope - libcontainer container 008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e. 
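The kubelet dns.go warning above reports that the nameserver list applied to pods was trimmed to "1.1.1.1 1.0.0.1 8.8.8.8" because the node's resolver configuration lists more nameservers than kubelet's limit of three (mirroring the classic glibc resolver limit). A minimal sketch of the same truncation; the fourth server here (8.8.4.4) is a made-up stand-in for whatever entry was actually dropped.

```python
# Sketch of the nameserver truncation behind the "Nameserver limits exceeded"
# warning: keep only the first three nameserver entries from a resolv.conf-style
# configuration.
MAX_NAMESERVERS = 3

def effective_nameservers(resolv_conf: str) -> list:
    servers = [
        line.split()[1]
        for line in resolv_conf.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) >= 2
    ]
    return servers[:MAX_NAMESERVERS]

conf = (
    "nameserver 1.1.1.1\n"
    "nameserver 1.0.0.1\n"
    "nameserver 8.8.8.8\n"
    "nameserver 8.8.4.4\n"   # hypothetical extra entry, omitted by the limit
)
print(effective_nameservers(conf))  # -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```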
Jan 24 12:01:12.702475 kubelet[2876]: I0124 12:01:12.699715 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xzzf9" podStartSLOduration=73.69968863 podStartE2EDuration="1m13.69968863s" podCreationTimestamp="2026-01-24 11:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 12:01:12.575360781 +0000 UTC m=+79.417572941" watchObservedRunningTime="2026-01-24 12:01:12.69968863 +0000 UTC m=+79.541900810" Jan 24 12:01:12.854759 kernel: kauditd_printk_skb: 93 callbacks suppressed Jan 24 12:01:12.855036 kernel: audit: type=1325 audit(1769256072.831:603): table=filter:119 family=2 entries=20 op=nft_register_rule pid=4809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:01:12.831000 audit[4809]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:01:12.831000 audit[4809]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc8bcefb80 a2=0 a3=7ffc8bcefb6c items=0 ppid=3041 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:12.883129 kernel: audit: type=1300 audit(1769256072.831:603): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc8bcefb80 a2=0 a3=7ffc8bcefb6c items=0 ppid=3041 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:12.894948 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:01:13.065449 kernel: audit: type=1327 audit(1769256072.831:603): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:01:13.065521 kernel: audit: type=1334 audit(1769256072.858:604): prog-id=193 op=LOAD Jan 24 12:01:13.065646 kernel: audit: type=1334 audit(1769256072.859:605): prog-id=194 op=LOAD Jan 24 12:01:13.065695 kernel: audit: type=1300 audit(1769256072.859:605): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4767 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.065735 kernel: audit: type=1327 audit(1769256072.859:605): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030386634363535326530356162623432353461653234663835633364 Jan 24 12:01:13.065770 kernel: audit: type=1334 audit(1769256072.859:606): prog-id=194 op=UNLOAD Jan 24 12:01:13.065812 kernel: audit: type=1300 audit(1769256072.859:606): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4767 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.065902 kernel: audit: type=1327 audit(1769256072.859:606): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030386634363535326530356162623432353461653234663835633364 Jan 24 12:01:12.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:01:12.858000 audit: BPF prog-id=193 op=LOAD Jan 24 12:01:12.859000 audit: BPF prog-id=194 op=LOAD Jan 24 12:01:12.859000 audit[4780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4767 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:12.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030386634363535326530356162623432353461653234663835633364 Jan 24 12:01:12.859000 audit: BPF prog-id=194 op=UNLOAD Jan 24 12:01:12.859000 audit[4780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4767 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:12.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030386634363535326530356162623432353461653234663835633364 Jan 24 12:01:12.864000 audit[4809]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:01:12.864000 audit[4809]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc8bcefb80 a2=0 a3=0 items=0 ppid=3041 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:12.864000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:01:12.860000 audit: BPF prog-id=195 op=LOAD Jan 24 12:01:12.860000 audit[4780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4767 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:12.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030386634363535326530356162623432353461653234663835633364 Jan 24 12:01:12.864000 audit: BPF prog-id=196 op=LOAD Jan 24 12:01:12.864000 audit[4780]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4767 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:12.864000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030386634363535326530356162623432353461653234663835633364 Jan 24 12:01:12.864000 audit: BPF prog-id=196 op=UNLOAD Jan 24 12:01:12.864000 audit[4780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4767 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:12.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030386634363535326530356162623432353461653234663835633364 Jan 24 12:01:12.864000 audit: BPF prog-id=195 op=UNLOAD Jan 24 12:01:12.864000 audit[4780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4767 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:12.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030386634363535326530356162623432353461653234663835633364 Jan 24 12:01:12.864000 audit: BPF prog-id=197 op=LOAD Jan 24 12:01:12.864000 audit[4780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4767 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:12.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030386634363535326530356162623432353461653234663835633364 Jan 24 12:01:13.229669 containerd[1652]: time="2026-01-24T12:01:13.229590718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76997bfb4b-ggwxc,Uid:543ea964-5bd2-4a2a-be7e-5b64397ea1f6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"008f46552e05abb4254ae24f85c3d277523be0fa87b70074215423bf740f511e\"" Jan 24 12:01:13.233656 containerd[1652]: time="2026-01-24T12:01:13.233468505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 12:01:13.337034 systemd-networkd[1510]: cali82a35205021: Link UP Jan 24 12:01:13.338964 systemd-networkd[1510]: cali82a35205021: Gained carrier Jan 24 12:01:13.369390 containerd[1652]: time="2026-01-24T12:01:13.369333832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:13.379639 containerd[1652]: time="2026-01-24T12:01:13.379496506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:13.380012 containerd[1652]: time="2026-01-24T12:01:13.379937243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 12:01:13.384980 kubelet[2876]: E0124 12:01:13.382287 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:01:13.384980 kubelet[2876]: E0124 12:01:13.382367 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:01:13.384980 kubelet[2876]: E0124 12:01:13.382459 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76997bfb4b-ggwxc_calico-apiserver(543ea964-5bd2-4a2a-be7e-5b64397ea1f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:13.384980 kubelet[2876]: E0124 12:01:13.382498 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:12.574 [INFO][4735] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:12.750 [INFO][4735] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5d9dddf448--n9r2d-eth0 calico-kube-controllers-5d9dddf448- calico-system dd704f6f-5a9f-42a8-93d9-5d24176bfd82 953 0 2026-01-24 12:00:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d9dddf448 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5d9dddf448-n9r2d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali82a35205021 [] [] }} ContainerID="337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" Namespace="calico-system" Pod="calico-kube-controllers-5d9dddf448-n9r2d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9dddf448--n9r2d-" Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:12.750 [INFO][4735] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" Namespace="calico-system" Pod="calico-kube-controllers-5d9dddf448-n9r2d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9dddf448--n9r2d-eth0" Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:12.926 [INFO][4813] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" HandleID="k8s-pod-network.337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" Workload="localhost-k8s-calico--kube--controllers--5d9dddf448--n9r2d-eth0" Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:12.927 [INFO][4813] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" HandleID="k8s-pod-network.337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" Workload="localhost-k8s-calico--kube--controllers--5d9dddf448--n9r2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00047aef0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5d9dddf448-n9r2d", "timestamp":"2026-01-24 12:01:12.926928659 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:12.927 [INFO][4813] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:12.927 [INFO][4813] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:12.927 [INFO][4813] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:13.078 [INFO][4813] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" host="localhost" Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:13.177 [INFO][4813] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:13.211 [INFO][4813] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:13.230 [INFO][4813] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:13.236 [INFO][4813] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:13.239 [INFO][4813] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" host="localhost" Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:13.249 [INFO][4813] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120 Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:13.261 [INFO][4813] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" host="localhost" Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:13.300 [INFO][4813] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" host="localhost" Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:13.300 [INFO][4813] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.133/26] handle="k8s-pod-network.337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" host="localhost" Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:13.300 [INFO][4813] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 12:01:13.398956 containerd[1652]: 2026-01-24 12:01:13.300 [INFO][4813] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" HandleID="k8s-pod-network.337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" Workload="localhost-k8s-calico--kube--controllers--5d9dddf448--n9r2d-eth0" Jan 24 12:01:13.400501 containerd[1652]: 2026-01-24 12:01:13.330 [INFO][4735] cni-plugin/k8s.go 418: Populated endpoint ContainerID="337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" Namespace="calico-system" Pod="calico-kube-controllers-5d9dddf448-n9r2d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9dddf448--n9r2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d9dddf448--n9r2d-eth0", GenerateName:"calico-kube-controllers-5d9dddf448-", Namespace:"calico-system", SelfLink:"", UID:"dd704f6f-5a9f-42a8-93d9-5d24176bfd82", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d9dddf448", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5d9dddf448-n9r2d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali82a35205021", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:13.400501 containerd[1652]: 2026-01-24 12:01:13.331 [INFO][4735] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" Namespace="calico-system" Pod="calico-kube-controllers-5d9dddf448-n9r2d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9dddf448--n9r2d-eth0" Jan 24 12:01:13.400501 containerd[1652]: 2026-01-24 12:01:13.331 [INFO][4735] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82a35205021 ContainerID="337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" Namespace="calico-system" Pod="calico-kube-controllers-5d9dddf448-n9r2d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9dddf448--n9r2d-eth0" Jan 24 12:01:13.400501 containerd[1652]: 2026-01-24 12:01:13.345 [INFO][4735] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" 
Namespace="calico-system" Pod="calico-kube-controllers-5d9dddf448-n9r2d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9dddf448--n9r2d-eth0" Jan 24 12:01:13.400501 containerd[1652]: 2026-01-24 12:01:13.357 [INFO][4735] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" Namespace="calico-system" Pod="calico-kube-controllers-5d9dddf448-n9r2d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9dddf448--n9r2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d9dddf448--n9r2d-eth0", GenerateName:"calico-kube-controllers-5d9dddf448-", Namespace:"calico-system", SelfLink:"", UID:"dd704f6f-5a9f-42a8-93d9-5d24176bfd82", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d9dddf448", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120", Pod:"calico-kube-controllers-5d9dddf448-n9r2d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali82a35205021", MAC:"96:64:52:bf:70:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:13.400501 containerd[1652]: 2026-01-24 12:01:13.392 [INFO][4735] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" Namespace="calico-system" Pod="calico-kube-controllers-5d9dddf448-n9r2d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9dddf448--n9r2d-eth0" Jan 24 12:01:13.565737 containerd[1652]: time="2026-01-24T12:01:13.562754941Z" level=info msg="connecting to shim 337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120" address="unix:///run/containerd/s/429b905e30192c912891c56ed290b71b549c4f3d07cbfa2962ddd4658d21c4d8" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:01:13.568537 kubelet[2876]: E0124 12:01:13.568454 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:13.589094 kubelet[2876]: E0124 12:01:13.588919 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", 
failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-869c797fbb-hltgx" podUID="ec18082a-49e1-4173-9b94-153e655a0861" Jan 24 12:01:13.600027 kubelet[2876]: E0124 12:01:13.598735 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:01:13.656000 audit: BPF prog-id=198 op=LOAD Jan 24 12:01:13.656000 audit[4903]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe122b7f80 a2=98 a3=1fffffffffffffff items=0 ppid=4582 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.656000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 12:01:13.656000 audit: BPF prog-id=198 op=UNLOAD Jan 24 12:01:13.656000 audit[4903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe122b7f50 a3=0 items=0 ppid=4582 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.656000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 12:01:13.656000 audit: BPF prog-id=199 op=LOAD Jan 24 12:01:13.656000 audit[4903]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe122b7e60 a2=94 a3=3 items=0 ppid=4582 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.656000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 12:01:13.656000 audit: BPF prog-id=199 op=UNLOAD Jan 24 12:01:13.656000 audit[4903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe122b7e60 a2=94 a3=3 items=0 ppid=4582 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.656000 audit: 
PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 12:01:13.656000 audit: BPF prog-id=200 op=LOAD Jan 24 12:01:13.656000 audit[4903]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe122b7ea0 a2=94 a3=7ffe122b8080 items=0 ppid=4582 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.656000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 12:01:13.657000 audit: BPF prog-id=200 op=UNLOAD Jan 24 12:01:13.657000 audit[4903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe122b7ea0 a2=94 a3=7ffe122b8080 items=0 ppid=4582 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.657000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 12:01:13.663000 audit: BPF prog-id=201 op=LOAD Jan 24 12:01:13.663000 audit[4910]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd30ea24a0 a2=98 a3=3 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.663000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:13.663000 audit: BPF prog-id=201 op=UNLOAD Jan 24 12:01:13.663000 audit[4910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd30ea2470 a3=0 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.663000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:13.663000 audit: BPF prog-id=202 op=LOAD Jan 24 12:01:13.663000 audit[4910]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd30ea2290 a2=94 a3=54428f items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.663000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:13.667000 audit: BPF prog-id=202 op=UNLOAD Jan 24 12:01:13.667000 audit[4910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd30ea2290 a2=94 a3=54428f items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.667000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:13.667000 audit: BPF prog-id=203 op=LOAD Jan 24 12:01:13.667000 audit[4910]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd30ea22c0 a2=94 a3=2 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.667000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:13.667000 audit: BPF prog-id=203 op=UNLOAD Jan 24 12:01:13.667000 audit[4910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd30ea22c0 a2=0 a3=2 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.667000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:13.719673 systemd[1]: Started cri-containerd-337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120.scope - libcontainer container 337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120. Jan 24 12:01:13.758992 systemd-networkd[1510]: calia4bc5d8a42b: Link UP Jan 24 12:01:13.762007 systemd-networkd[1510]: calia4bc5d8a42b: Gained carrier Jan 24 12:01:13.780739 systemd-networkd[1510]: cali45840fec461: Gained IPv6LL Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:12.547 [INFO][4730] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:12.693 [INFO][4730] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76997bfb4b--55f79-eth0 calico-apiserver-76997bfb4b- calico-apiserver 7e54efbd-6a62-4db3-8b3c-99aa330f72d1 946 0 2026-01-24 12:00:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76997bfb4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76997bfb4b-55f79 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia4bc5d8a42b [] [] }} ContainerID="21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-55f79" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--55f79-" Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:12.693 [INFO][4730] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-55f79" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--55f79-eth0" Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.004 [INFO][4811] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" HandleID="k8s-pod-network.21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" Workload="localhost-k8s-calico--apiserver--76997bfb4b--55f79-eth0" Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.004 [INFO][4811] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" 
HandleID="k8s-pod-network.21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" Workload="localhost-k8s-calico--apiserver--76997bfb4b--55f79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004879b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76997bfb4b-55f79", "timestamp":"2026-01-24 12:01:13.004253199 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.005 [INFO][4811] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.300 [INFO][4811] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.327 [INFO][4811] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.385 [INFO][4811] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" host="localhost" Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.440 [INFO][4811] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.472 [INFO][4811] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.479 [INFO][4811] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.503 [INFO][4811] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.505 [INFO][4811] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" host="localhost" Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.566 [INFO][4811] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82 Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.654 [INFO][4811] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" host="localhost" Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.705 [INFO][4811] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" host="localhost" Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.705 [INFO][4811] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" host="localhost" Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.705 [INFO][4811] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 12:01:13.860623 containerd[1652]: 2026-01-24 12:01:13.705 [INFO][4811] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" HandleID="k8s-pod-network.21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" Workload="localhost-k8s-calico--apiserver--76997bfb4b--55f79-eth0" Jan 24 12:01:13.864711 containerd[1652]: 2026-01-24 12:01:13.714 [INFO][4730] cni-plugin/k8s.go 418: Populated endpoint ContainerID="21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-55f79" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--55f79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76997bfb4b--55f79-eth0", GenerateName:"calico-apiserver-76997bfb4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e54efbd-6a62-4db3-8b3c-99aa330f72d1", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76997bfb4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76997bfb4b-55f79", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4bc5d8a42b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:13.864711 containerd[1652]: 2026-01-24 12:01:13.714 [INFO][4730] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-55f79" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--55f79-eth0" Jan 24 12:01:13.864711 containerd[1652]: 2026-01-24 12:01:13.714 [INFO][4730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4bc5d8a42b ContainerID="21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-55f79" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--55f79-eth0" Jan 24 12:01:13.864711 containerd[1652]: 2026-01-24 12:01:13.770 [INFO][4730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-55f79" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--55f79-eth0" Jan 24 12:01:13.864711 containerd[1652]: 2026-01-24 12:01:13.780 [INFO][4730] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-55f79" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--55f79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76997bfb4b--55f79-eth0", GenerateName:"calico-apiserver-76997bfb4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e54efbd-6a62-4db3-8b3c-99aa330f72d1", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76997bfb4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82", Pod:"calico-apiserver-76997bfb4b-55f79", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4bc5d8a42b", MAC:"3e:ac:e0:3d:c0:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:13.864711 containerd[1652]: 2026-01-24 12:01:13.844 [INFO][4730] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" Namespace="calico-apiserver" Pod="calico-apiserver-76997bfb4b-55f79" WorkloadEndpoint="localhost-k8s-calico--apiserver--76997bfb4b--55f79-eth0" Jan 24 12:01:13.867000 audit: BPF prog-id=204 op=LOAD Jan 24 12:01:13.869000 audit: BPF prog-id=205 op=LOAD Jan 24 12:01:13.869000 audit[4897]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4884 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333376236306264393836623765393964626164323136343637373236 Jan 24 12:01:13.870000 audit: BPF prog-id=205 op=UNLOAD Jan 24 12:01:13.870000 audit[4897]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4884 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.870000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333376236306264393836623765393964626164323136343637373236 Jan 24 12:01:13.870000 audit: BPF prog-id=206 op=LOAD Jan 24 12:01:13.870000 audit[4897]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4884 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333376236306264393836623765393964626164323136343637373236 Jan 24 12:01:13.870000 audit: BPF prog-id=207 op=LOAD Jan 24 12:01:13.870000 audit[4897]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4884 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333376236306264393836623765393964626164323136343637373236 Jan 24 12:01:13.870000 audit: BPF prog-id=207 op=UNLOAD Jan 24 12:01:13.870000 audit[4897]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4884 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333376236306264393836623765393964626164323136343637373236 Jan 24 12:01:13.870000 audit: BPF prog-id=206 op=UNLOAD Jan 24 12:01:13.870000 audit[4897]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4884 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333376236306264393836623765393964626164323136343637373236 Jan 24 12:01:13.870000 audit: BPF prog-id=208 op=LOAD Jan 24 12:01:13.870000 audit[4897]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4884 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:13.870000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333376236306264393836623765393964626164323136343637373236 Jan 24 12:01:13.873925 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:01:13.991599 containerd[1652]: time="2026-01-24T12:01:13.991495643Z" level=info msg="connecting to shim 21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82" address="unix:///run/containerd/s/a4c134d3abe74140209e63262f1cb4da326d2468e7259f89be2add2d4fbf8db6" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:01:14.044065 containerd[1652]: time="2026-01-24T12:01:14.043455181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9dddf448-n9r2d,Uid:dd704f6f-5a9f-42a8-93d9-5d24176bfd82,Namespace:calico-system,Attempt:0,} returns sandbox id \"337b60bd986b7e99dbad21646772665f1bff7d7c1f2d99f08c8411b24d320120\"" Jan 24 12:01:14.047786 containerd[1652]: time="2026-01-24T12:01:14.047754283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 12:01:14.114460 systemd[1]: Started cri-containerd-21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82.scope - libcontainer container 21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82. Jan 24 12:01:14.130456 containerd[1652]: time="2026-01-24T12:01:14.130223510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:14.135185 containerd[1652]: time="2026-01-24T12:01:14.135139118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 12:01:14.135400 containerd[1652]: time="2026-01-24T12:01:14.135373424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:14.135843 kubelet[2876]: E0124 12:01:14.135786 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 12:01:14.136281 kubelet[2876]: E0124 12:01:14.136244 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 12:01:14.137210 kubelet[2876]: E0124 12:01:14.137071 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5d9dddf448-n9r2d_calico-system(dd704f6f-5a9f-42a8-93d9-5d24176bfd82): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:14.137210 kubelet[2876]: E0124 12:01:14.137130 2876 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:01:14.175000 audit[4975]: NETFILTER_CFG table=filter:121 family=2 entries=17 op=nft_register_rule pid=4975 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:01:14.175000 audit[4975]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc748f6cc0 a2=0 a3=7ffc748f6cac items=0 ppid=3041 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:01:14.183000 audit[4975]: NETFILTER_CFG table=nat:122 family=2 entries=35 op=nft_register_chain pid=4975 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:01:14.183000 audit[4975]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc748f6cc0 a2=0 a3=7ffc748f6cac items=0 ppid=3041 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.183000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:01:14.193000 audit: BPF prog-id=209 op=LOAD Jan 24 12:01:14.196000 audit: BPF prog-id=210 op=LOAD Jan 24 12:01:14.196000 audit[4954]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4934 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313137636266306130306432373537313430333033653033376464 Jan 24 12:01:14.196000 audit: BPF prog-id=210 op=UNLOAD Jan 24 12:01:14.196000 audit[4954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4934 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313137636266306130306432373537313430333033653033376464 Jan 24 12:01:14.197000 audit: BPF prog-id=211 op=LOAD Jan 24 12:01:14.197000 audit[4954]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4934 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
12:01:14.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313137636266306130306432373537313430333033653033376464 Jan 24 12:01:14.197000 audit: BPF prog-id=212 op=LOAD Jan 24 12:01:14.197000 audit[4954]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4934 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313137636266306130306432373537313430333033653033376464 Jan 24 12:01:14.197000 audit: BPF prog-id=212 op=UNLOAD Jan 24 12:01:14.197000 audit[4954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4934 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313137636266306130306432373537313430333033653033376464 Jan 24 12:01:14.197000 audit: BPF prog-id=211 op=UNLOAD Jan 24 12:01:14.197000 audit[4954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4934 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313137636266306130306432373537313430333033653033376464 Jan 24 12:01:14.197000 audit: BPF prog-id=213 op=LOAD Jan 24 12:01:14.197000 audit[4954]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4934 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313137636266306130306432373537313430333033653033376464 Jan 24 12:01:14.210722 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:01:14.361000 audit: BPF prog-id=214 op=LOAD Jan 24 12:01:14.361000 audit[4910]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd30ea2180 a2=94 a3=1 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
12:01:14.361000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:14.361000 audit: BPF prog-id=214 op=UNLOAD Jan 24 12:01:14.361000 audit[4910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd30ea2180 a2=94 a3=1 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.361000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:14.387978 containerd[1652]: time="2026-01-24T12:01:14.387923368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76997bfb4b-55f79,Uid:7e54efbd-6a62-4db3-8b3c-99aa330f72d1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"21117cbf0a00d2757140303e037dd47fedefb2ba4f261496f10be742528f6f82\"" Jan 24 12:01:14.394947 containerd[1652]: time="2026-01-24T12:01:14.394338482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 12:01:14.396000 audit: BPF prog-id=215 op=LOAD Jan 24 12:01:14.396000 audit[4910]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd30ea2170 a2=94 a3=4 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.396000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:14.397000 audit: BPF prog-id=215 op=UNLOAD Jan 24 12:01:14.397000 audit[4910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd30ea2170 a2=0 a3=4 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.397000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:14.399000 audit: BPF prog-id=216 op=LOAD Jan 24 12:01:14.399000 audit[4910]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd30ea1fd0 a2=94 a3=5 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.399000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:14.399000 audit: BPF prog-id=216 op=UNLOAD Jan 24 12:01:14.399000 audit[4910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd30ea1fd0 a2=0 a3=5 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.399000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:14.399000 audit: BPF prog-id=217 op=LOAD Jan 24 12:01:14.399000 audit[4910]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd30ea21f0 a2=94 a3=6 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.399000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:14.399000 audit: BPF prog-id=217 op=UNLOAD Jan 24 12:01:14.399000 
audit[4910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd30ea21f0 a2=0 a3=6 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.399000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:14.402000 audit: BPF prog-id=218 op=LOAD Jan 24 12:01:14.402000 audit[4910]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd30ea19a0 a2=94 a3=88 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.402000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:14.403000 audit: BPF prog-id=219 op=LOAD Jan 24 12:01:14.403000 audit[4910]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd30ea1820 a2=94 a3=2 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.403000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:14.403000 audit: BPF prog-id=219 op=UNLOAD Jan 24 12:01:14.403000 audit[4910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd30ea1850 a2=0 a3=7ffd30ea1950 items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.403000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:14.403000 audit: BPF prog-id=218 op=UNLOAD Jan 24 12:01:14.403000 audit[4910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=50f9d10 a2=0 a3=a0efe453709007fd items=0 ppid=4582 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.403000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 12:01:14.435000 audit: BPF prog-id=220 op=LOAD Jan 24 12:01:14.435000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8bca8f60 a2=98 a3=1999999999999999 items=0 ppid=4582 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.435000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 12:01:14.435000 audit: BPF prog-id=220 op=UNLOAD Jan 24 12:01:14.435000 audit[4984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd8bca8f30 a3=0 items=0 ppid=4582 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.435000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 12:01:14.436000 audit: BPF prog-id=221 op=LOAD Jan 24 12:01:14.436000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8bca8e40 a2=94 a3=ffff items=0 ppid=4582 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.436000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 12:01:14.436000 audit: BPF prog-id=221 op=UNLOAD Jan 24 12:01:14.436000 audit[4984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd8bca8e40 a2=94 a3=ffff items=0 ppid=4582 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.436000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 12:01:14.436000 audit: BPF prog-id=222 op=LOAD Jan 24 12:01:14.436000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8bca8e80 a2=94 a3=7ffd8bca9060 items=0 ppid=4582 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.436000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 12:01:14.436000 audit: BPF prog-id=222 op=UNLOAD Jan 24 12:01:14.436000 audit[4984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd8bca8e80 a2=94 a3=7ffd8bca9060 items=0 ppid=4582 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.436000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 12:01:14.479755 containerd[1652]: time="2026-01-24T12:01:14.479493641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:14.489367 containerd[1652]: time="2026-01-24T12:01:14.489161087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found" Jan 24 12:01:14.489367 containerd[1652]: time="2026-01-24T12:01:14.489227022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:14.493219 kubelet[2876]: E0124 12:01:14.493158 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:01:14.493430 kubelet[2876]: E0124 12:01:14.493399 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:01:14.501260 kubelet[2876]: E0124 12:01:14.501125 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76997bfb4b-55f79_calico-apiserver(7e54efbd-6a62-4db3-8b3c-99aa330f72d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:14.505326 kubelet[2876]: E0124 12:01:14.505285 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:01:14.584311 kubelet[2876]: E0124 12:01:14.584168 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:01:14.589150 kubelet[2876]: E0124 12:01:14.589025 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:01:14.590619 kubelet[2876]: E0124 12:01:14.590522 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:14.591784 kubelet[2876]: E0124 12:01:14.591707 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:01:14.780963 systemd-networkd[1510]: vxlan.calico: Link UP Jan 24 12:01:14.781056 systemd-networkd[1510]: vxlan.calico: Gained carrier Jan 24 12:01:14.932000 audit: BPF prog-id=223 op=LOAD Jan 24 12:01:14.932000 audit[5009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe38dbb6e0 a2=98 a3=0 items=0 ppid=4582 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.932000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:01:14.932000 audit: BPF prog-id=223 op=UNLOAD Jan 24 12:01:14.932000 audit[5009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe38dbb6b0 a3=0 items=0 ppid=4582 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.932000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:01:14.932000 audit: BPF prog-id=224 op=LOAD Jan 24 12:01:14.932000 audit[5009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe38dbb4f0 a2=94 a3=54428f items=0 ppid=4582 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.932000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:01:14.932000 audit: BPF prog-id=224 op=UNLOAD Jan 24 12:01:14.932000 audit[5009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe38dbb4f0 a2=94 a3=54428f items=0 ppid=4582 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.932000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:01:14.932000 audit: BPF prog-id=225 op=LOAD Jan 24 12:01:14.932000 audit[5009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe38dbb520 a2=94 a3=2 items=0 ppid=4582 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.932000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:01:14.932000 audit: BPF prog-id=225 op=UNLOAD Jan 24 12:01:14.932000 audit[5009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe38dbb520 a2=0 a3=2 items=0 ppid=4582 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.932000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:01:14.933000 audit: BPF prog-id=226 op=LOAD Jan 24 12:01:14.933000 audit[5009]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe38dbb2d0 a2=94 a3=4 items=0 ppid=4582 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.933000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:01:14.933000 audit: BPF prog-id=226 op=UNLOAD Jan 24 12:01:14.933000 audit[5009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe38dbb2d0 a2=94 a3=4 items=0 ppid=4582 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.933000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:01:14.933000 audit: BPF prog-id=227 op=LOAD Jan 24 12:01:14.933000 audit[5009]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe38dbb3d0 a2=94 a3=7ffe38dbb550 items=0 ppid=4582 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.933000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:01:14.933000 audit: BPF prog-id=227 op=UNLOAD Jan 24 12:01:14.933000 audit[5009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe38dbb3d0 a2=0 a3=7ffe38dbb550 items=0 ppid=4582 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.933000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:01:14.939000 audit: BPF prog-id=228 op=LOAD Jan 24 12:01:14.939000 audit[5009]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe38dbab00 a2=94 a3=2 items=0 ppid=4582 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.939000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:01:14.939000 audit: BPF prog-id=228 op=UNLOAD Jan 24 12:01:14.939000 audit[5009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe38dbab00 a2=0 a3=2 items=0 ppid=4582 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.939000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:01:14.939000 audit: BPF prog-id=229 op=LOAD Jan 24 12:01:14.939000 audit[5009]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe38dbac00 a2=94 a3=30 items=0 ppid=4582 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:14.939000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 12:01:14.999840 systemd-networkd[1510]: calia4bc5d8a42b: Gained IPv6LL Jan 24 12:01:15.031000 audit: BPF prog-id=230 op=LOAD Jan 24 12:01:15.031000 audit[5018]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff3cc78d50 a2=98 a3=0 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.031000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.032000 audit: BPF prog-id=230 op=UNLOAD Jan 24 12:01:15.032000 audit[5018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff3cc78d20 a3=0 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.032000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.032000 audit: BPF prog-id=231 op=LOAD Jan 24 12:01:15.032000 audit[5018]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff3cc78b40 a2=94 a3=54428f items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.032000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.032000 audit: BPF prog-id=231 op=UNLOAD Jan 24 12:01:15.032000 audit[5018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff3cc78b40 a2=94 a3=54428f items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.032000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.032000 audit: BPF prog-id=232 op=LOAD Jan 24 12:01:15.032000 audit[5018]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff3cc78b70 a2=94 a3=2 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.032000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.032000 audit: BPF prog-id=232 op=UNLOAD Jan 24 12:01:15.032000 audit[5018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff3cc78b70 a2=0 a3=2 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.032000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.188001 systemd-networkd[1510]: cali82a35205021: Gained IPv6LL Jan 24 12:01:15.262000 audit[5021]: NETFILTER_CFG table=filter:123 family=2 entries=14 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:01:15.262000 audit[5021]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb47b44e0 a2=0 a3=7ffdb47b44cc items=0 ppid=3041 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:01:15.282000 audit[5021]: NETFILTER_CFG table=nat:124 family=2 entries=20 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:01:15.282000 audit[5021]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdb47b44e0 a2=0 a3=7ffdb47b44cc items=0 ppid=3041 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.282000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:01:15.574000 audit: BPF prog-id=233 op=LOAD Jan 24 12:01:15.574000 audit[5018]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff3cc78a30 a2=94 a3=1 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.574000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.574000 audit: BPF prog-id=233 op=UNLOAD Jan 24 12:01:15.574000 audit[5018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff3cc78a30 a2=94 a3=1 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.574000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.593000 audit: BPF prog-id=234 op=LOAD Jan 24 12:01:15.593000 audit[5018]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff3cc78a20 a2=94 a3=4 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.593000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.593000 audit: BPF prog-id=234 op=UNLOAD Jan 24 12:01:15.593000 audit[5018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff3cc78a20 a2=0 a3=4 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.593000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.594000 audit: BPF prog-id=235 op=LOAD Jan 24 12:01:15.594000 audit[5018]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff3cc78880 a2=94 a3=5 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.594000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.594000 audit: BPF prog-id=235 op=UNLOAD Jan 24 12:01:15.594000 audit[5018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff3cc78880 a2=0 a3=5 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.594000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.594000 audit: BPF prog-id=236 op=LOAD Jan 24 12:01:15.594000 audit[5018]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff3cc78aa0 a2=94 a3=6 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.594000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.594000 audit: BPF prog-id=236 op=UNLOAD Jan 24 12:01:15.594000 audit[5018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff3cc78aa0 a2=0 a3=6 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.594000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.594000 audit: BPF prog-id=237 op=LOAD Jan 24 12:01:15.594000 audit[5018]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff3cc78250 a2=94 a3=88 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.594000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.595000 audit: BPF prog-id=238 op=LOAD Jan 24 12:01:15.595000 audit[5018]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff3cc780d0 a2=94 a3=2 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.595000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.595000 audit: BPF prog-id=238 op=UNLOAD Jan 24 12:01:15.595000 audit[5018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff3cc78100 a2=0 a3=7fff3cc78200 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.595000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.596000 audit: BPF prog-id=237 op=UNLOAD Jan 24 12:01:15.596000 audit[5018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=17b41d10 a2=0 a3=dc1776febc8adbf6 items=0 ppid=4582 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.596000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 12:01:15.621786 kubelet[2876]: E0124 12:01:15.600092 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:01:15.621786 kubelet[2876]: E0124 12:01:15.607643 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:01:15.633000 audit: BPF prog-id=229 op=UNLOAD Jan 24 12:01:15.633000 audit[4582]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00125e000 a2=0 a3=0 items=0 ppid=4571 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.633000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 24 12:01:15.995000 audit[5048]: NETFILTER_CFG table=nat:125 family=2 entries=15 op=nft_register_chain pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 12:01:15.995000 audit[5048]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff1d9dfba0 a2=0 a3=7fff1d9dfb8c items=0 ppid=4582 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:15.995000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 12:01:16.001000 audit[5050]: NETFILTER_CFG table=mangle:126 family=2 entries=16 op=nft_register_chain pid=5050 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 12:01:16.001000 audit[5050]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe9396ca20 a2=0 a3=7ffe9396ca0c items=0 ppid=4582 pid=5050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:16.001000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 12:01:16.054000 audit[5047]: NETFILTER_CFG 
table=raw:127 family=2 entries=21 op=nft_register_chain pid=5047 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 12:01:16.054000 audit[5047]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe6dfb8c10 a2=0 a3=7ffe6dfb8bfc items=0 ppid=4582 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:16.054000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 12:01:16.069000 audit[5052]: NETFILTER_CFG table=filter:128 family=2 entries=263 op=nft_register_chain pid=5052 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 12:01:16.069000 audit[5052]: SYSCALL arch=c000003e syscall=46 success=yes exit=156020 a0=3 a1=7ffc769c74e0 a2=0 a3=7ffc769c74cc items=0 ppid=4582 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:16.069000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 12:01:16.403880 systemd-networkd[1510]: vxlan.calico: Gained IPv6LL Jan 24 12:01:18.902376 kubelet[2876]: E0124 12:01:18.898945 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:18.910895 containerd[1652]: time="2026-01-24T12:01:18.908077101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-576w8,Uid:3477849f-ef62-42dc-be46-c8edc5b93ccb,Namespace:calico-system,Attempt:0,}" Jan 24 12:01:19.169268 systemd-networkd[1510]: cali7e8d29c1d1d: Link UP Jan 24 12:01:19.169691 systemd-networkd[1510]: cali7e8d29c1d1d: Gained carrier Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.016 [INFO][5061] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--576w8-eth0 csi-node-driver- calico-system 3477849f-ef62-42dc-be46-c8edc5b93ccb 823 0 2026-01-24 12:00:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-576w8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7e8d29c1d1d [] [] }} ContainerID="094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" Namespace="calico-system" Pod="csi-node-driver-576w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--576w8-" Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.016 [INFO][5061] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" Namespace="calico-system" Pod="csi-node-driver-576w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--576w8-eth0" Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.074 [INFO][5074] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" HandleID="k8s-pod-network.094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" Workload="localhost-k8s-csi--node--driver--576w8-eth0" Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.075 [INFO][5074] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" HandleID="k8s-pod-network.094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" Workload="localhost-k8s-csi--node--driver--576w8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c76c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-576w8", "timestamp":"2026-01-24 12:01:19.074648192 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.075 [INFO][5074] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.075 [INFO][5074] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.075 [INFO][5074] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.088 [INFO][5074] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" host="localhost" Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.101 [INFO][5074] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.115 [INFO][5074] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.120 [INFO][5074] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.124 [INFO][5074] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.125 [INFO][5074] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" host="localhost" Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.127 [INFO][5074] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627 Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.140 [INFO][5074] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" host="localhost" Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.160 [INFO][5074] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" host="localhost" Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.160 [INFO][5074] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] 
handle="k8s-pod-network.094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" host="localhost" Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.160 [INFO][5074] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 12:01:19.196213 containerd[1652]: 2026-01-24 12:01:19.160 [INFO][5074] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" HandleID="k8s-pod-network.094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" Workload="localhost-k8s-csi--node--driver--576w8-eth0" Jan 24 12:01:19.199944 containerd[1652]: 2026-01-24 12:01:19.164 [INFO][5061] cni-plugin/k8s.go 418: Populated endpoint ContainerID="094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" Namespace="calico-system" Pod="csi-node-driver-576w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--576w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--576w8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3477849f-ef62-42dc-be46-c8edc5b93ccb", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-576w8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7e8d29c1d1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:19.199944 containerd[1652]: 2026-01-24 12:01:19.164 [INFO][5061] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" Namespace="calico-system" Pod="csi-node-driver-576w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--576w8-eth0" Jan 24 12:01:19.199944 containerd[1652]: 2026-01-24 12:01:19.164 [INFO][5061] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e8d29c1d1d ContainerID="094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" Namespace="calico-system" Pod="csi-node-driver-576w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--576w8-eth0" Jan 24 12:01:19.199944 containerd[1652]: 2026-01-24 12:01:19.168 [INFO][5061] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" Namespace="calico-system" Pod="csi-node-driver-576w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--576w8-eth0" Jan 24 12:01:19.199944 containerd[1652]: 2026-01-24 12:01:19.168 [INFO][5061] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" Namespace="calico-system" Pod="csi-node-driver-576w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--576w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--576w8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3477849f-ef62-42dc-be46-c8edc5b93ccb", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 12, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627", Pod:"csi-node-driver-576w8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7e8d29c1d1d", MAC:"42:50:b9:ee:31:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:19.199944 containerd[1652]: 2026-01-24 12:01:19.189 [INFO][5061] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" Namespace="calico-system" Pod="csi-node-driver-576w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--576w8-eth0" Jan 24 12:01:19.239653 kernel: kauditd_printk_skb: 272 callbacks suppressed Jan 24 12:01:19.239784 kernel: audit: type=1325 audit(1769256079.229:699): table=filter:129 family=2 entries=56 op=nft_register_chain pid=5092 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 12:01:19.229000 audit[5092]: NETFILTER_CFG table=filter:129 family=2 entries=56 op=nft_register_chain pid=5092 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 12:01:19.229000 audit[5092]: SYSCALL arch=c000003e syscall=46 success=yes exit=25516 a0=3 a1=7ffd7adfa120 a2=0 a3=7ffd7adfa10c items=0 ppid=4582 pid=5092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:19.254956 kernel: audit: type=1300 audit(1769256079.229:699): arch=c000003e syscall=46 success=yes exit=25516 a0=3 a1=7ffd7adfa120 a2=0 a3=7ffd7adfa10c items=0 ppid=4582 pid=5092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:19.261883 kernel: audit: type=1327 audit(1769256079.229:699): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 12:01:19.229000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 12:01:19.278076 containerd[1652]: time="2026-01-24T12:01:19.277917885Z" level=info msg="connecting to shim 094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627" address="unix:///run/containerd/s/66a7f6514ac8ec622a7fa42ff242dcd777b8f558962ee6912a1659925e819afc" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:01:19.341869 systemd[1]: Started cri-containerd-094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627.scope - libcontainer container 094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627. Jan 24 12:01:19.378000 audit: BPF prog-id=239 op=LOAD Jan 24 12:01:19.385375 kernel: audit: type=1334 audit(1769256079.378:700): prog-id=239 op=LOAD Jan 24 12:01:19.385521 kernel: audit: type=1334 audit(1769256079.379:701): prog-id=240 op=LOAD Jan 24 12:01:19.379000 audit: BPF prog-id=240 op=LOAD Jan 24 12:01:19.379000 audit[5112]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5101 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:19.386656 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:01:19.397750 kernel: audit: type=1300 audit(1769256079.379:701): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5101 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:19.397851 kernel: audit: type=1327 audit(1769256079.379:701): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346238653966636338656365336537363231613538613937396666 Jan 24 12:01:19.379000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346238653966636338656365336537363231613538613937396666 Jan 24 12:01:19.379000 audit: BPF prog-id=240 op=UNLOAD Jan 24 12:01:19.379000 audit[5112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5101 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:19.426634 kernel: audit: type=1334 audit(1769256079.379:702): prog-id=240 op=UNLOAD Jan 24 12:01:19.427242 kernel: audit: type=1300 audit(1769256079.379:702): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5101 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:19.427369 kernel: audit: type=1327 audit(1769256079.379:702): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346238653966636338656365336537363231613538613937396666 Jan 24 12:01:19.379000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346238653966636338656365336537363231613538613937396666 Jan 24 12:01:19.379000 audit: BPF prog-id=241 op=LOAD Jan 24 12:01:19.379000 audit[5112]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5101 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:19.379000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346238653966636338656365336537363231613538613937396666 Jan 24 12:01:19.380000 audit: BPF prog-id=242 op=LOAD Jan 24 12:01:19.380000 audit[5112]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5101 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:19.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346238653966636338656365336537363231613538613937396666 Jan 24 12:01:19.380000 audit: BPF prog-id=242 op=UNLOAD Jan 24 12:01:19.380000 audit[5112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5101 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:19.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346238653966636338656365336537363231613538613937396666 Jan 24 12:01:19.380000 audit: BPF prog-id=241 op=UNLOAD Jan 24 12:01:19.380000 audit[5112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5101 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:19.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346238653966636338656365336537363231613538613937396666 Jan 24 12:01:19.380000 audit: BPF prog-id=243 op=LOAD Jan 24 12:01:19.380000 audit[5112]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5101 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:19.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346238653966636338656365336537363231613538613937396666 Jan 24 12:01:19.450267 containerd[1652]: time="2026-01-24T12:01:19.450219971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-576w8,Uid:3477849f-ef62-42dc-be46-c8edc5b93ccb,Namespace:calico-system,Attempt:0,} returns sandbox id \"094b8e9fcc8ece3e7621a58a979fffc4ec08c0bc7da4d09bd160b9fada369627\"" Jan 24 12:01:19.453489 containerd[1652]: time="2026-01-24T12:01:19.452861756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 12:01:19.533406 containerd[1652]: time="2026-01-24T12:01:19.532831203Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:19.537635 containerd[1652]: time="2026-01-24T12:01:19.537493565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 12:01:19.537828 containerd[1652]: time="2026-01-24T12:01:19.537635501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:19.538624 kubelet[2876]: E0124 12:01:19.538153 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 12:01:19.538624 kubelet[2876]: E0124 12:01:19.538501 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 12:01:19.539365 kubelet[2876]: E0124 12:01:19.538945 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-576w8_calico-system(3477849f-ef62-42dc-be46-c8edc5b93ccb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:19.542780 containerd[1652]: time="2026-01-24T12:01:19.542674618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 12:01:19.604845 containerd[1652]: time="2026-01-24T12:01:19.604694396Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:19.607610 containerd[1652]: time="2026-01-24T12:01:19.607379128Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 12:01:19.607610 containerd[1652]: time="2026-01-24T12:01:19.607503242Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:19.607927 kubelet[2876]: E0124 12:01:19.607860 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 12:01:19.608100 kubelet[2876]: E0124 12:01:19.607941 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 12:01:19.608100 kubelet[2876]: E0124 12:01:19.608040 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-576w8_calico-system(3477849f-ef62-42dc-be46-c8edc5b93ccb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:19.608208 kubelet[2876]: E0124 12:01:19.608139 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:01:19.619191 kubelet[2876]: E0124 12:01:19.619049 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:01:20.625016 kubelet[2876]: E0124 12:01:20.624846 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not 
found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:01:21.011903 systemd-networkd[1510]: cali7e8d29c1d1d: Gained IPv6LL Jan 24 12:01:22.898730 kubelet[2876]: E0124 12:01:22.896045 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:22.899304 containerd[1652]: time="2026-01-24T12:01:22.896853083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9mzkh,Uid:5d0d046c-db5e-4542-a5a5-0466daa13e9a,Namespace:kube-system,Attempt:0,}" Jan 24 12:01:23.149980 systemd-networkd[1510]: cali3ecdba01559: Link UP Jan 24 12:01:23.150946 systemd-networkd[1510]: cali3ecdba01559: Gained carrier Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:22.996 [INFO][5144] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--9mzkh-eth0 coredns-66bc5c9577- kube-system 5d0d046c-db5e-4542-a5a5-0466daa13e9a 948 0 2026-01-24 11:59:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-9mzkh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3ecdba01559 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" Namespace="kube-system" Pod="coredns-66bc5c9577-9mzkh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9mzkh-" Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:22.996 [INFO][5144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" Namespace="kube-system" Pod="coredns-66bc5c9577-9mzkh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9mzkh-eth0" Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.046 [INFO][5159] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" HandleID="k8s-pod-network.acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" Workload="localhost-k8s-coredns--66bc5c9577--9mzkh-eth0" Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.046 [INFO][5159] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" HandleID="k8s-pod-network.acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" Workload="localhost-k8s-coredns--66bc5c9577--9mzkh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00021e840), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-9mzkh", "timestamp":"2026-01-24 12:01:23.046124486 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.047 [INFO][5159] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.047 [INFO][5159] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.047 [INFO][5159] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.064 [INFO][5159] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" host="localhost" Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.073 [INFO][5159] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.091 [INFO][5159] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.095 [INFO][5159] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.101 [INFO][5159] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.101 [INFO][5159] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" host="localhost" Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.105 [INFO][5159] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.114 [INFO][5159] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" host="localhost" Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.139 [INFO][5159] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" host="localhost" Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.139 [INFO][5159] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" host="localhost" Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.139 [INFO][5159] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
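A quick sanity check of the IPAM arithmetic visible above, sketched in Python (illustrative only, not part of the captured log): Calico reports the affinity block 192.168.88.128/26 and hands out 192.168.88.135 (csi-node-driver-576w8) and 192.168.88.136 (coredns-66bc5c9577-9mzkh), both of which should land inside that 64-address block.

    import ipaddress

    # Block and addresses as logged by the Calico IPAM plugin above.
    block = ipaddress.ip_network("192.168.88.128/26")
    assigned = [
        ipaddress.ip_address("192.168.88.135"),  # csi-node-driver-576w8
        ipaddress.ip_address("192.168.88.136"),  # coredns-66bc5c9577-9mzkh
    ]

    print(block.num_addresses)                  # 64 addresses in the /26 block
    print(all(ip in block for ip in assigned))  # True: both claims fit the block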
Jan 24 12:01:23.177398 containerd[1652]: 2026-01-24 12:01:23.139 [INFO][5159] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" HandleID="k8s-pod-network.acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" Workload="localhost-k8s-coredns--66bc5c9577--9mzkh-eth0" Jan 24 12:01:23.179281 containerd[1652]: 2026-01-24 12:01:23.144 [INFO][5144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" Namespace="kube-system" Pod="coredns-66bc5c9577-9mzkh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9mzkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--9mzkh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5d0d046c-db5e-4542-a5a5-0466daa13e9a", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 11, 59, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-9mzkh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3ecdba01559", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:23.179281 containerd[1652]: 2026-01-24 12:01:23.144 [INFO][5144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" Namespace="kube-system" Pod="coredns-66bc5c9577-9mzkh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9mzkh-eth0" Jan 24 12:01:23.179281 containerd[1652]: 2026-01-24 12:01:23.144 [INFO][5144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ecdba01559 ContainerID="acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" Namespace="kube-system" Pod="coredns-66bc5c9577-9mzkh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9mzkh-eth0" Jan 24 12:01:23.179281 containerd[1652]: 2026-01-24 12:01:23.151 
[INFO][5144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" Namespace="kube-system" Pod="coredns-66bc5c9577-9mzkh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9mzkh-eth0" Jan 24 12:01:23.179281 containerd[1652]: 2026-01-24 12:01:23.151 [INFO][5144] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" Namespace="kube-system" Pod="coredns-66bc5c9577-9mzkh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9mzkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--9mzkh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5d0d046c-db5e-4542-a5a5-0466daa13e9a", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 11, 59, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f", Pod:"coredns-66bc5c9577-9mzkh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3ecdba01559", MAC:"ce:07:10:fa:74:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 12:01:23.179281 containerd[1652]: 2026-01-24 12:01:23.171 [INFO][5144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" Namespace="kube-system" Pod="coredns-66bc5c9577-9mzkh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9mzkh-eth0" Jan 24 12:01:23.210000 audit[5178]: NETFILTER_CFG table=filter:130 family=2 entries=56 op=nft_register_chain pid=5178 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 12:01:23.210000 audit[5178]: SYSCALL arch=c000003e syscall=46 success=yes exit=25096 a0=3 a1=7ffeeee268a0 a2=0 a3=7ffeeee2688c items=0 ppid=4582 pid=5178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.210000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 12:01:23.235160 containerd[1652]: time="2026-01-24T12:01:23.235067866Z" level=info msg="connecting to shim acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f" address="unix:///run/containerd/s/75056c38cdfdd4b1f66ad2550f4e7576172e8223b4640ec16c5683393cf0cd57" namespace=k8s.io protocol=ttrpc version=3 Jan 24 12:01:23.292854 systemd[1]: Started cri-containerd-acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f.scope - libcontainer container acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f. Jan 24 12:01:23.318000 audit: BPF prog-id=244 op=LOAD Jan 24 12:01:23.319000 audit: BPF prog-id=245 op=LOAD Jan 24 12:01:23.319000 audit[5197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5187 pid=5197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633237636339323737616433353264343934633035343939383730 Jan 24 12:01:23.319000 audit: BPF prog-id=245 op=UNLOAD Jan 24 12:01:23.319000 audit[5197]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5187 pid=5197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633237636339323737616433353264343934633035343939383730 Jan 24 12:01:23.320000 audit: BPF prog-id=246 op=LOAD Jan 24 12:01:23.320000 audit[5197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5187 pid=5197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633237636339323737616433353264343934633035343939383730 Jan 24 12:01:23.320000 audit: BPF prog-id=247 op=LOAD Jan 24 12:01:23.320000 audit[5197]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=5187 pid=5197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.320000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633237636339323737616433353264343934633035343939383730 Jan 24 12:01:23.320000 audit: BPF prog-id=247 op=UNLOAD Jan 24 12:01:23.320000 audit[5197]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5187 pid=5197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633237636339323737616433353264343934633035343939383730 Jan 24 12:01:23.320000 audit: BPF prog-id=246 op=UNLOAD Jan 24 12:01:23.320000 audit[5197]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5187 pid=5197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633237636339323737616433353264343934633035343939383730 Jan 24 12:01:23.320000 audit: BPF prog-id=248 op=LOAD Jan 24 12:01:23.320000 audit[5197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5187 pid=5197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633237636339323737616433353264343934633035343939383730 Jan 24 12:01:23.323921 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 12:01:23.383607 containerd[1652]: time="2026-01-24T12:01:23.383186992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9mzkh,Uid:5d0d046c-db5e-4542-a5a5-0466daa13e9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f\"" Jan 24 12:01:23.386408 kubelet[2876]: E0124 12:01:23.384509 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:23.395387 containerd[1652]: time="2026-01-24T12:01:23.395316407Z" level=info msg="CreateContainer within sandbox \"acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 12:01:23.411693 containerd[1652]: time="2026-01-24T12:01:23.410806015Z" level=info msg="Container 5e8057346db4a75ff4195915a0a873b01e08e91cc62ec3c43448d068d0eb92cc: CDI devices from CRI Config.CDIDevices: []" Jan 24 12:01:23.423113 containerd[1652]: time="2026-01-24T12:01:23.423034181Z" 
level=info msg="CreateContainer within sandbox \"acc27cc9277ad352d494c05499870b02734d8c1a43f47aff88aeca6da00e926f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e8057346db4a75ff4195915a0a873b01e08e91cc62ec3c43448d068d0eb92cc\"" Jan 24 12:01:23.423910 containerd[1652]: time="2026-01-24T12:01:23.423857876Z" level=info msg="StartContainer for \"5e8057346db4a75ff4195915a0a873b01e08e91cc62ec3c43448d068d0eb92cc\"" Jan 24 12:01:23.426789 containerd[1652]: time="2026-01-24T12:01:23.425400285Z" level=info msg="connecting to shim 5e8057346db4a75ff4195915a0a873b01e08e91cc62ec3c43448d068d0eb92cc" address="unix:///run/containerd/s/75056c38cdfdd4b1f66ad2550f4e7576172e8223b4640ec16c5683393cf0cd57" protocol=ttrpc version=3 Jan 24 12:01:23.461946 systemd[1]: Started cri-containerd-5e8057346db4a75ff4195915a0a873b01e08e91cc62ec3c43448d068d0eb92cc.scope - libcontainer container 5e8057346db4a75ff4195915a0a873b01e08e91cc62ec3c43448d068d0eb92cc. Jan 24 12:01:23.488000 audit: BPF prog-id=249 op=LOAD Jan 24 12:01:23.489000 audit: BPF prog-id=250 op=LOAD Jan 24 12:01:23.489000 audit[5224]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5187 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565383035373334366462346137356666343139353931356130613837 Jan 24 12:01:23.489000 audit: BPF prog-id=250 op=UNLOAD Jan 24 12:01:23.489000 audit[5224]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5187 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565383035373334366462346137356666343139353931356130613837 Jan 24 12:01:23.489000 audit: BPF prog-id=251 op=LOAD Jan 24 12:01:23.489000 audit[5224]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5187 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565383035373334366462346137356666343139353931356130613837 Jan 24 12:01:23.490000 audit: BPF prog-id=252 op=LOAD Jan 24 12:01:23.490000 audit[5224]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5187 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.490000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565383035373334366462346137356666343139353931356130613837 Jan 24 12:01:23.490000 audit: BPF prog-id=252 op=UNLOAD Jan 24 12:01:23.490000 audit[5224]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5187 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565383035373334366462346137356666343139353931356130613837 Jan 24 12:01:23.490000 audit: BPF prog-id=251 op=UNLOAD Jan 24 12:01:23.490000 audit[5224]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5187 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565383035373334366462346137356666343139353931356130613837 Jan 24 12:01:23.490000 audit: BPF prog-id=253 op=LOAD Jan 24 12:01:23.490000 audit[5224]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5187 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565383035373334366462346137356666343139353931356130613837 Jan 24 12:01:23.541481 containerd[1652]: time="2026-01-24T12:01:23.541402200Z" level=info msg="StartContainer for \"5e8057346db4a75ff4195915a0a873b01e08e91cc62ec3c43448d068d0eb92cc\" returns successfully" Jan 24 12:01:23.636604 kubelet[2876]: E0124 12:01:23.636365 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:23.658284 kubelet[2876]: I0124 12:01:23.658135 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9mzkh" podStartSLOduration=84.658116948 podStartE2EDuration="1m24.658116948s" podCreationTimestamp="2026-01-24 11:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 12:01:23.657134501 +0000 UTC m=+90.499346661" watchObservedRunningTime="2026-01-24 12:01:23.658116948 +0000 UTC m=+90.500329107" Jan 24 12:01:23.702000 audit[5260]: NETFILTER_CFG table=filter:131 family=2 entries=14 op=nft_register_rule pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:01:23.702000 audit[5260]: SYSCALL arch=c000003e syscall=46 
success=yes exit=5248 a0=3 a1=7ffd30c18100 a2=0 a3=7ffd30c180ec items=0 ppid=3041 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.702000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:01:23.713000 audit[5260]: NETFILTER_CFG table=nat:132 family=2 entries=44 op=nft_register_rule pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:01:23.713000 audit[5260]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd30c18100 a2=0 a3=7ffd30c180ec items=0 ppid=3041 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:23.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:01:23.889060 kubelet[2876]: E0124 12:01:23.888203 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:23.892192 containerd[1652]: time="2026-01-24T12:01:23.892061617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 12:01:23.966815 containerd[1652]: time="2026-01-24T12:01:23.966505298Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:23.968881 containerd[1652]: time="2026-01-24T12:01:23.968824167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 12:01:23.968881 containerd[1652]: time="2026-01-24T12:01:23.968878672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:23.969364 kubelet[2876]: E0124 12:01:23.969285 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 12:01:23.969923 kubelet[2876]: E0124 12:01:23.969406 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 12:01:23.969923 kubelet[2876]: E0124 12:01:23.969494 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-869c797fbb-hltgx_calico-system(ec18082a-49e1-4173-9b94-153e655a0861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:23.971641 containerd[1652]: time="2026-01-24T12:01:23.971504406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 12:01:24.029931 containerd[1652]: 
time="2026-01-24T12:01:24.029694505Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:24.031464 containerd[1652]: time="2026-01-24T12:01:24.031353956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 12:01:24.031605 containerd[1652]: time="2026-01-24T12:01:24.031478183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:24.031861 kubelet[2876]: E0124 12:01:24.031784 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 12:01:24.031861 kubelet[2876]: E0124 12:01:24.031858 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 12:01:24.032173 kubelet[2876]: E0124 12:01:24.031937 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-869c797fbb-hltgx_calico-system(ec18082a-49e1-4173-9b94-153e655a0861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:24.032312 kubelet[2876]: E0124 12:01:24.032203 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-869c797fbb-hltgx" podUID="ec18082a-49e1-4173-9b94-153e655a0861" Jan 24 12:01:24.639399 kubelet[2876]: E0124 12:01:24.639276 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:24.743000 audit[5262]: NETFILTER_CFG table=filter:133 family=2 entries=14 op=nft_register_rule pid=5262 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:01:24.752435 kernel: kauditd_printk_skb: 68 callbacks suppressed Jan 24 12:01:24.752622 kernel: audit: type=1325 audit(1769256084.743:727): table=filter:133 family=2 entries=14 op=nft_register_rule pid=5262 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:01:24.743000 audit[5262]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff0d3aeda0 a2=0 a3=7fff0d3aed8c items=0 ppid=3041 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:24.743000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:01:24.772802 kernel: audit: type=1300 audit(1769256084.743:727): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff0d3aeda0 a2=0 a3=7fff0d3aed8c items=0 ppid=3041 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:24.772936 kernel: audit: type=1327 audit(1769256084.743:727): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:01:24.784000 audit[5262]: NETFILTER_CFG table=nat:134 family=2 entries=56 op=nft_register_chain pid=5262 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:01:24.788088 systemd-networkd[1510]: cali3ecdba01559: Gained IPv6LL Jan 24 12:01:24.784000 audit[5262]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff0d3aeda0 a2=0 a3=7fff0d3aed8c items=0 ppid=3041 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:24.784000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:01:24.804732 kernel: audit: type=1325 audit(1769256084.784:728): table=nat:134 family=2 entries=56 op=nft_register_chain pid=5262 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:01:24.804821 kernel: audit: type=1300 audit(1769256084.784:728): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff0d3aeda0 a2=0 a3=7fff0d3aed8c items=0 ppid=3041 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:01:24.804854 kernel: audit: type=1327 audit(1769256084.784:728): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:01:25.890970 containerd[1652]: time="2026-01-24T12:01:25.890917990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 12:01:25.965445 containerd[1652]: time="2026-01-24T12:01:25.965249131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:25.981651 containerd[1652]: time="2026-01-24T12:01:25.981510214Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 12:01:25.981997 containerd[1652]: time="2026-01-24T12:01:25.981910435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:25.982401 kubelet[2876]: E0124 12:01:25.982334 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 12:01:25.983860 kubelet[2876]: E0124 12:01:25.982419 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 12:01:25.983860 kubelet[2876]: E0124 12:01:25.983062 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-k2xcd_calico-system(f944553c-3de6-4dea-af30-6e177d6839ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:25.983860 kubelet[2876]: E0124 12:01:25.983122 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:01:25.984479 containerd[1652]: time="2026-01-24T12:01:25.984452754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 12:01:26.068077 containerd[1652]: time="2026-01-24T12:01:26.067886556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:26.071068 containerd[1652]: time="2026-01-24T12:01:26.070981675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 12:01:26.071219 containerd[1652]: time="2026-01-24T12:01:26.071106222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:26.071486 kubelet[2876]: E0124 12:01:26.071396 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:01:26.071486 kubelet[2876]: E0124 12:01:26.071470 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:01:26.071952 kubelet[2876]: E0124 12:01:26.071799 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76997bfb4b-ggwxc_calico-apiserver(543ea964-5bd2-4a2a-be7e-5b64397ea1f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:26.071952 kubelet[2876]: E0124 12:01:26.071911 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:01:26.072062 containerd[1652]: time="2026-01-24T12:01:26.072009623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 12:01:26.145068 containerd[1652]: time="2026-01-24T12:01:26.144872717Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:26.149274 containerd[1652]: time="2026-01-24T12:01:26.148861995Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 12:01:26.149274 containerd[1652]: time="2026-01-24T12:01:26.148994007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:26.149485 kubelet[2876]: E0124 12:01:26.149373 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:01:26.149485 kubelet[2876]: E0124 12:01:26.149476 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:01:26.149931 kubelet[2876]: E0124 12:01:26.149635 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76997bfb4b-55f79_calico-apiserver(7e54efbd-6a62-4db3-8b3c-99aa330f72d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:26.149931 kubelet[2876]: E0124 12:01:26.149735 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:01:27.894457 containerd[1652]: time="2026-01-24T12:01:27.894189594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 12:01:28.008990 containerd[1652]: time="2026-01-24T12:01:28.007919457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:28.011530 containerd[1652]: time="2026-01-24T12:01:28.010946639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 
12:01:28.011530 containerd[1652]: time="2026-01-24T12:01:28.011094159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:28.012144 kubelet[2876]: E0124 12:01:28.011535 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 12:01:28.012144 kubelet[2876]: E0124 12:01:28.011728 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 12:01:28.013035 kubelet[2876]: E0124 12:01:28.012269 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5d9dddf448-n9r2d_calico-system(dd704f6f-5a9f-42a8-93d9-5d24176bfd82): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:28.013035 kubelet[2876]: E0124 12:01:28.012328 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:01:31.894401 kubelet[2876]: E0124 12:01:31.894268 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:33.895331 kubelet[2876]: E0124 12:01:33.894747 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:34.890912 containerd[1652]: time="2026-01-24T12:01:34.890611742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 12:01:34.965712 containerd[1652]: time="2026-01-24T12:01:34.965380841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:34.969215 containerd[1652]: time="2026-01-24T12:01:34.969174050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:34.970035 containerd[1652]: time="2026-01-24T12:01:34.969894281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 12:01:34.970313 kubelet[2876]: E0124 12:01:34.970228 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 12:01:34.970313 kubelet[2876]: E0124 12:01:34.970308 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 12:01:34.971103 kubelet[2876]: E0124 12:01:34.970428 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-576w8_calico-system(3477849f-ef62-42dc-be46-c8edc5b93ccb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:34.975112 containerd[1652]: time="2026-01-24T12:01:34.974923780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 12:01:35.042443 containerd[1652]: time="2026-01-24T12:01:35.041306220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:35.045004 containerd[1652]: time="2026-01-24T12:01:35.044921265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 12:01:35.045195 containerd[1652]: time="2026-01-24T12:01:35.045050314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:35.045736 kubelet[2876]: E0124 12:01:35.045479 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 12:01:35.045736 kubelet[2876]: E0124 12:01:35.045687 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 12:01:35.045877 kubelet[2876]: E0124 12:01:35.045797 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-576w8_calico-system(3477849f-ef62-42dc-be46-c8edc5b93ccb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:35.045877 kubelet[2876]: E0124 12:01:35.045855 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:01:35.997246 kubelet[2876]: E0124 12:01:35.992833 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-869c797fbb-hltgx" podUID="ec18082a-49e1-4173-9b94-153e655a0861" Jan 24 12:01:36.928835 kubelet[2876]: E0124 12:01:36.928204 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:01:39.932884 kubelet[2876]: E0124 12:01:39.932276 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:01:40.031954 kubelet[2876]: E0124 12:01:40.031774 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:40.904613 kubelet[2876]: E0124 12:01:40.904176 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:01:40.906946 kubelet[2876]: E0124 12:01:40.906498 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:01:42.973016 kubelet[2876]: E0124 12:01:42.965664 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:01:47.934617 kubelet[2876]: E0124 12:01:47.934470 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:01:48.909342 containerd[1652]: time="2026-01-24T12:01:48.909220911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 12:01:49.071976 containerd[1652]: time="2026-01-24T12:01:49.071679301Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:49.077633 containerd[1652]: time="2026-01-24T12:01:49.075629573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 12:01:49.077633 containerd[1652]: time="2026-01-24T12:01:49.075754621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:49.078002 kubelet[2876]: E0124 12:01:49.076243 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 12:01:49.078002 kubelet[2876]: E0124 12:01:49.076314 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 12:01:49.078002 kubelet[2876]: E0124 12:01:49.076816 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-869c797fbb-hltgx_calico-system(ec18082a-49e1-4173-9b94-153e655a0861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:49.082633 containerd[1652]: time="2026-01-24T12:01:49.082496688Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 12:01:49.206411 containerd[1652]: time="2026-01-24T12:01:49.202723850Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:49.206411 containerd[1652]: time="2026-01-24T12:01:49.205735056Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 12:01:49.206411 containerd[1652]: time="2026-01-24T12:01:49.206056087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:49.211358 kubelet[2876]: E0124 12:01:49.208402 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:01:49.211358 kubelet[2876]: E0124 12:01:49.208458 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:01:49.211358 kubelet[2876]: E0124 12:01:49.209487 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76997bfb4b-ggwxc_calico-apiserver(543ea964-5bd2-4a2a-be7e-5b64397ea1f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:49.211358 kubelet[2876]: E0124 12:01:49.209539 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:01:49.227606 containerd[1652]: time="2026-01-24T12:01:49.212532978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 12:01:49.313846 containerd[1652]: time="2026-01-24T12:01:49.310816708Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:49.331518 containerd[1652]: time="2026-01-24T12:01:49.329495078Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 12:01:49.331518 containerd[1652]: time="2026-01-24T12:01:49.329784969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:49.331906 kubelet[2876]: E0124 12:01:49.330299 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 12:01:49.331906 kubelet[2876]: E0124 12:01:49.330446 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 12:01:49.331906 kubelet[2876]: E0124 12:01:49.330947 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-869c797fbb-hltgx_calico-system(ec18082a-49e1-4173-9b94-153e655a0861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:49.331906 kubelet[2876]: E0124 12:01:49.331092 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-869c797fbb-hltgx" podUID="ec18082a-49e1-4173-9b94-153e655a0861" Jan 24 12:01:50.898869 containerd[1652]: time="2026-01-24T12:01:50.898355068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 12:01:51.033343 containerd[1652]: time="2026-01-24T12:01:51.033109324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:51.038528 containerd[1652]: time="2026-01-24T12:01:51.038462594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:51.038801 containerd[1652]: time="2026-01-24T12:01:51.038650943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 12:01:51.042230 kubelet[2876]: E0124 12:01:51.040327 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 12:01:51.042230 kubelet[2876]: E0124 12:01:51.040419 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 12:01:51.042230 kubelet[2876]: E0124 12:01:51.040519 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-k2xcd_calico-system(f944553c-3de6-4dea-af30-6e177d6839ad): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:51.042230 kubelet[2876]: E0124 12:01:51.040614 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:01:54.897254 containerd[1652]: time="2026-01-24T12:01:54.894024007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 12:01:55.099177 containerd[1652]: time="2026-01-24T12:01:55.098776651Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:55.109249 containerd[1652]: time="2026-01-24T12:01:55.109190258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:55.132603 containerd[1652]: time="2026-01-24T12:01:55.132304601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 12:01:55.133594 kubelet[2876]: E0124 12:01:55.133378 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 12:01:55.135846 kubelet[2876]: E0124 12:01:55.134282 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 12:01:55.136163 kubelet[2876]: E0124 12:01:55.136128 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5d9dddf448-n9r2d_calico-system(dd704f6f-5a9f-42a8-93d9-5d24176bfd82): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:55.136958 kubelet[2876]: E0124 12:01:55.136905 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:01:55.137215 containerd[1652]: time="2026-01-24T12:01:55.137162047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 12:01:55.338234 containerd[1652]: 
time="2026-01-24T12:01:55.337759749Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:55.345281 containerd[1652]: time="2026-01-24T12:01:55.342841284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:55.345281 containerd[1652]: time="2026-01-24T12:01:55.342897190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 12:01:55.345528 kubelet[2876]: E0124 12:01:55.343291 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:01:55.345528 kubelet[2876]: E0124 12:01:55.343344 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:01:55.345528 kubelet[2876]: E0124 12:01:55.343531 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76997bfb4b-55f79_calico-apiserver(7e54efbd-6a62-4db3-8b3c-99aa330f72d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:55.345528 kubelet[2876]: E0124 12:01:55.343640 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:01:59.891597 containerd[1652]: time="2026-01-24T12:01:59.891328253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 12:01:59.973656 containerd[1652]: time="2026-01-24T12:01:59.973471279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:01:59.981595 containerd[1652]: time="2026-01-24T12:01:59.980025570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 12:01:59.981595 containerd[1652]: time="2026-01-24T12:01:59.980189785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 12:01:59.981785 kubelet[2876]: E0124 12:01:59.980429 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 12:01:59.981785 kubelet[2876]: E0124 
12:01:59.980484 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 12:01:59.981785 kubelet[2876]: E0124 12:01:59.980912 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-576w8_calico-system(3477849f-ef62-42dc-be46-c8edc5b93ccb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 12:01:59.988722 containerd[1652]: time="2026-01-24T12:01:59.984437732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 12:02:00.048024 containerd[1652]: time="2026-01-24T12:02:00.047776914Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:02:00.053329 containerd[1652]: time="2026-01-24T12:02:00.053277858Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 12:02:00.053674 containerd[1652]: time="2026-01-24T12:02:00.053334497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 12:02:00.054472 kubelet[2876]: E0124 12:02:00.054098 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 12:02:00.054472 kubelet[2876]: E0124 12:02:00.054174 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 12:02:00.054472 kubelet[2876]: E0124 12:02:00.054292 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-576w8_calico-system(3477849f-ef62-42dc-be46-c8edc5b93ccb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 12:02:00.054472 kubelet[2876]: E0124 12:02:00.054357 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:02:00.890788 kubelet[2876]: E0124 12:02:00.890611 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:02:01.892212 kubelet[2876]: E0124 12:02:01.891917 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:02:03.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.100:22-10.0.0.1:49774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:03.807627 systemd[1]: Started sshd@7-10.0.0.100:22-10.0.0.1:49774.service - OpenSSH per-connection server daemon (10.0.0.1:49774). Jan 24 12:02:03.821616 kernel: audit: type=1130 audit(1769256123.807:729): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.100:22-10.0.0.1:49774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:02:03.891479 kubelet[2876]: E0124 12:02:03.891361 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-869c797fbb-hltgx" podUID="ec18082a-49e1-4173-9b94-153e655a0861" Jan 24 12:02:04.088000 audit[5332]: USER_ACCT pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:04.096930 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 49774 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:02:04.102801 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:02:04.091000 audit[5332]: CRED_ACQ pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:04.134993 kernel: audit: type=1101 audit(1769256124.088:730): pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:04.135206 kernel: audit: type=1103 audit(1769256124.091:731): pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:04.140420 systemd-logind[1619]: New session 9 of user core. Jan 24 12:02:04.155695 kernel: audit: type=1006 audit(1769256124.091:732): pid=5332 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 24 12:02:04.158032 systemd[1]: Started session-9.scope - Session 9 of User core. 
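The records above keep cycling through the same three stages: containerd reports a 404 from ghcr.io for the v3.30.4 Calico images, kubelet surfaces that as ErrImagePull, and later sync attempts are rejected with ImagePullBackOff until the backoff window expires. The sketch below illustrates that kind of capped exponential backoff; the 10-second initial delay and 5-minute cap are kubelet's commonly cited defaults and should be treated as assumptions here, not values read from this host's configuration.

```python
import time

# Illustrative sketch of the capped exponential backoff behind the
# ErrImagePull -> ImagePullBackOff cycle above. The 10 s initial delay and
# 300 s cap mirror kubelet's commonly cited defaults but are assumptions,
# not values taken from this host.
INITIAL_DELAY = 10.0   # seconds
MAX_DELAY = 300.0      # seconds


def pull_with_backoff(pull_image, image, attempts=5, sleep=time.sleep):
    """Retry pull_image(image) with doubling delays, capped at MAX_DELAY."""
    delay = INITIAL_DELAY
    for attempt in range(1, attempts + 1):
        try:
            return pull_image(image)
        except RuntimeError as err:            # e.g. "not found" from the registry
            print(f"attempt {attempt}: {err}; backing off {delay:.0f}s")
            sleep(delay)
            delay = min(delay * 2, MAX_DELAY)
    raise RuntimeError(f"giving up on {image}")


if __name__ == "__main__":
    def always_404(image):
        raise RuntimeError(f"{image}: not found (404)")

    try:
        # no-op sleeper so the demo finishes immediately
        pull_with_backoff(always_404, "ghcr.io/flatcar/calico/apiserver:v3.30.4",
                          attempts=3, sleep=lambda s: None)
    except RuntimeError as err:
        print(err)
```

Once a backoff window expires kubelet attempts a real pull again, which is why fresh PullImage/404 pairs keep reappearing between the Back-off messages further down.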
Jan 24 12:02:04.091000 audit[5332]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd405a7f80 a2=3 a3=0 items=0 ppid=1 pid=5332 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:04.177837 kernel: audit: type=1300 audit(1769256124.091:732): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd405a7f80 a2=3 a3=0 items=0 ppid=1 pid=5332 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:04.091000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:04.170000 audit[5332]: USER_START pid=5332 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:04.221677 kernel: audit: type=1327 audit(1769256124.091:732): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:04.221826 kernel: audit: type=1105 audit(1769256124.170:733): pid=5332 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:04.170000 audit[5336]: CRED_ACQ pid=5336 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:04.234905 kernel: audit: type=1103 audit(1769256124.170:734): pid=5336 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:04.539939 sshd[5336]: Connection closed by 10.0.0.1 port 49774 Jan 24 12:02:04.541852 sshd-session[5332]: pam_unix(sshd:session): session closed for user core Jan 24 12:02:04.569316 kernel: audit: type=1106 audit(1769256124.547:735): pid=5332 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:04.547000 audit[5332]: USER_END pid=5332 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:04.552651 systemd[1]: sshd@7-10.0.0.100:22-10.0.0.1:49774.service: Deactivated successfully. Jan 24 12:02:04.556512 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 12:02:04.570500 systemd-logind[1619]: Session 9 logged out. Waiting for processes to exit. 
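The PROCTITLE record above carries the process title as hex-encoded bytes; decoding it shows which process the surrounding audit events belong to.

```python
# Decode the hex-encoded proctitle field from the PROCTITLE record above.
raw = "737368642D73657373696F6E3A20636F7265205B707269765D"
print(bytes.fromhex(raw).decode())   # -> sshd-session: core [priv]
```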
Jan 24 12:02:04.547000 audit[5332]: CRED_DISP pid=5332 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:04.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.100:22-10.0.0.1:49774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:04.573632 systemd-logind[1619]: Removed session 9. Jan 24 12:02:04.584294 kernel: audit: type=1104 audit(1769256124.547:736): pid=5332 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:08.897170 kubelet[2876]: E0124 12:02:08.894825 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:02:08.897170 kubelet[2876]: E0124 12:02:08.896053 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:02:09.578409 systemd[1]: Started sshd@8-10.0.0.100:22-10.0.0.1:49784.service - OpenSSH per-connection server daemon (10.0.0.1:49784). Jan 24 12:02:09.587795 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:02:09.587992 kernel: audit: type=1130 audit(1769256129.578:738): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.100:22-10.0.0.1:49784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:09.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.100:22-10.0.0.1:49784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:09.857081 sshd[5386]: Accepted publickey for core from 10.0.0.1 port 49784 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:02:09.855000 audit[5386]: USER_ACCT pid=5386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:09.860887 sshd-session[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:02:09.883667 systemd-logind[1619]: New session 10 of user core. 
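Each audit stamp has the form audit(EPOCH.MILLIS:SERIAL): Unix epoch seconds, milliseconds, and a monotonically increasing event serial. Converting the stamp from the session-9 teardown above reproduces the wall-clock prefix journald prints for the same event.

```python
from datetime import datetime, timezone

# audit(1769256124.547:736) -> epoch seconds + milliseconds, then the event serial
stamp = "1769256124.547:736"
epoch, serial = stamp.split(":")
when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
print(when.isoformat(timespec="milliseconds"), "serial", serial)
# 2026-01-24T12:02:04.547+00:00 serial 736, matching the "Jan 24 12:02:04" prefix
```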
Jan 24 12:02:09.887156 kernel: audit: type=1101 audit(1769256129.855:739): pid=5386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:09.887233 kernel: audit: type=1103 audit(1769256129.858:740): pid=5386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:09.858000 audit[5386]: CRED_ACQ pid=5386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:09.906205 kernel: audit: type=1006 audit(1769256129.858:741): pid=5386 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 24 12:02:09.906327 kernel: audit: type=1300 audit(1769256129.858:741): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef5c94b50 a2=3 a3=0 items=0 ppid=1 pid=5386 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:09.858000 audit[5386]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef5c94b50 a2=3 a3=0 items=0 ppid=1 pid=5386 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:09.939084 kernel: audit: type=1327 audit(1769256129.858:741): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:09.858000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:09.954481 systemd[1]: Started session-10.scope - Session 10 of User core. 
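The kernel echo lines identify records only by number (audit: type=1101, type=1103, ...), while the userspace lines carry the symbolic names (USER_ACCT, CRED_ACQ, ...). A small lookup table for the types occurring in this journal makes the interleaved kernel lines easier to read; these are the standard Linux audit message types, and most of the pairings can be checked directly against the adjacent records.

```python
# Numeric audit record types appearing in this journal, with their symbolic
# names (standard Linux audit message types; e.g. type=1130 appears next to
# SERVICE_START on the sshd@N unit lines above).
AUDIT_TYPES = {
    1006: "LOGIN",          # login uid/session assignment
    1101: "USER_ACCT",      # PAM accounting
    1103: "CRED_ACQ",       # PAM credentials acquired
    1104: "CRED_DISP",      # PAM credentials disposed
    1105: "USER_START",     # PAM session opened
    1106: "USER_END",       # PAM session closed
    1130: "SERVICE_START",  # systemd unit started
    1131: "SERVICE_STOP",   # systemd unit stopped
    1300: "SYSCALL",        # syscall record
    1327: "PROCTITLE",      # process title (hex encoded)
}

print(AUDIT_TYPES[1130])  # SERVICE_START
```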
Jan 24 12:02:09.969000 audit[5386]: USER_START pid=5386 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:09.979000 audit[5392]: CRED_ACQ pid=5392 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:10.002659 kernel: audit: type=1105 audit(1769256129.969:742): pid=5386 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:10.002801 kernel: audit: type=1103 audit(1769256129.979:743): pid=5392 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:10.273737 sshd[5392]: Connection closed by 10.0.0.1 port 49784 Jan 24 12:02:10.274857 sshd-session[5386]: pam_unix(sshd:session): session closed for user core Jan 24 12:02:10.277000 audit[5386]: USER_END pid=5386 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:10.284474 systemd-logind[1619]: Session 10 logged out. Waiting for processes to exit. Jan 24 12:02:10.286255 systemd[1]: sshd@8-10.0.0.100:22-10.0.0.1:49784.service: Deactivated successfully. Jan 24 12:02:10.292501 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 12:02:10.295650 kernel: audit: type=1106 audit(1769256130.277:744): pid=5386 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:10.302105 systemd-logind[1619]: Removed session 10. Jan 24 12:02:10.277000 audit[5386]: CRED_DISP pid=5386 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:10.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.100:22-10.0.0.1:49784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:02:10.318757 kernel: audit: type=1104 audit(1769256130.277:745): pid=5386 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:10.897308 kubelet[2876]: E0124 12:02:10.897220 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:02:14.890623 kubelet[2876]: E0124 12:02:14.890433 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:02:14.890623 kubelet[2876]: E0124 12:02:14.890435 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:02:15.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.100:22-10.0.0.1:60116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:15.307192 systemd[1]: Started sshd@9-10.0.0.100:22-10.0.0.1:60116.service - OpenSSH per-connection server daemon (10.0.0.1:60116). Jan 24 12:02:15.316769 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:02:15.316854 kernel: audit: type=1130 audit(1769256135.305:747): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.100:22-10.0.0.1:60116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:02:15.469835 sshd[5407]: Accepted publickey for core from 10.0.0.1 port 60116 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:02:15.467000 audit[5407]: USER_ACCT pid=5407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:15.476869 sshd-session[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:02:15.471000 audit[5407]: CRED_ACQ pid=5407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:15.499719 systemd-logind[1619]: New session 11 of user core. Jan 24 12:02:15.504123 kernel: audit: type=1101 audit(1769256135.467:748): pid=5407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:15.504228 kernel: audit: type=1103 audit(1769256135.471:749): pid=5407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:15.504296 kernel: audit: type=1006 audit(1769256135.471:750): pid=5407 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 24 12:02:15.471000 audit[5407]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0fd21220 a2=3 a3=0 items=0 ppid=1 pid=5407 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:15.538159 kernel: audit: type=1300 audit(1769256135.471:750): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0fd21220 a2=3 a3=0 items=0 ppid=1 pid=5407 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:15.471000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:15.544729 kernel: audit: type=1327 audit(1769256135.471:750): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:15.546075 systemd[1]: Started session-11.scope - Session 11 of User core. 
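The SYSCALL records above (arch=c000003e syscall=1 success=yes exit=3 a0=8 ...) describe the audited call itself: c000003e is AUDIT_ARCH_X86_64, syscall 1 on x86_64 is write, a0..a3 are the first arguments, and exit is the return value (here, 3 bytes written to file descriptor 8). A minimal decoder covering only the values seen in this journal:

```python
# Decode the arch/syscall fields of the SYSCALL audit records above.
# Only values that actually appear in this log are mapped; anything else
# would need the full per-architecture syscall table.
ARCHES = {0xC000003E: "x86_64 (AUDIT_ARCH_X86_64)"}
X86_64_SYSCALLS = {1: "write"}


def decode(arch_hex: str, nr: int) -> str:
    arch = ARCHES.get(int(arch_hex, 16), "unknown arch")
    name = X86_64_SYSCALLS.get(nr, f"syscall {nr}")
    return f"{arch}: {name}"


print(decode("c000003e", 1))   # x86_64 (AUDIT_ARCH_X86_64): write
```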
Jan 24 12:02:15.551000 audit[5407]: USER_START pid=5407 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:15.559000 audit[5411]: CRED_ACQ pid=5411 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:15.607097 kernel: audit: type=1105 audit(1769256135.551:751): pid=5407 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:15.607273 kernel: audit: type=1103 audit(1769256135.559:752): pid=5411 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:15.905798 sshd[5411]: Connection closed by 10.0.0.1 port 60116 Jan 24 12:02:15.907079 sshd-session[5407]: pam_unix(sshd:session): session closed for user core Jan 24 12:02:15.916000 audit[5407]: USER_END pid=5407 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:15.928926 systemd[1]: sshd@9-10.0.0.100:22-10.0.0.1:60116.service: Deactivated successfully. Jan 24 12:02:15.937384 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 12:02:15.951375 systemd-logind[1619]: Session 11 logged out. Waiting for processes to exit. Jan 24 12:02:15.961418 kernel: audit: type=1106 audit(1769256135.916:753): pid=5407 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:15.962284 systemd-logind[1619]: Removed session 11. Jan 24 12:02:15.916000 audit[5407]: CRED_DISP pid=5407 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:15.988104 kernel: audit: type=1104 audit(1769256135.916:754): pid=5407 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:15.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.100:22-10.0.0.1:60116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:02:17.900835 kubelet[2876]: E0124 12:02:17.895310 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-869c797fbb-hltgx" podUID="ec18082a-49e1-4173-9b94-153e655a0861" Jan 24 12:02:20.906270 kubelet[2876]: E0124 12:02:20.905033 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:02:20.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.100:22-10.0.0.1:60124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:20.993879 systemd[1]: Started sshd@10-10.0.0.100:22-10.0.0.1:60124.service - OpenSSH per-connection server daemon (10.0.0.1:60124). Jan 24 12:02:20.998847 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:02:20.998979 kernel: audit: type=1130 audit(1769256140.993:756): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.100:22-10.0.0.1:60124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:02:21.244000 audit[5426]: USER_ACCT pid=5426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:21.246731 sshd[5426]: Accepted publickey for core from 10.0.0.1 port 60124 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:02:21.253000 audit[5426]: CRED_ACQ pid=5426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:21.256002 sshd-session[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:02:21.267978 kernel: audit: type=1101 audit(1769256141.244:757): pid=5426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:21.268694 kernel: audit: type=1103 audit(1769256141.253:758): pid=5426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:21.268753 kernel: audit: type=1006 audit(1769256141.253:759): pid=5426 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 24 12:02:21.275191 kernel: audit: type=1300 audit(1769256141.253:759): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd9c913f0 a2=3 a3=0 items=0 ppid=1 pid=5426 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:21.253000 audit[5426]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd9c913f0 a2=3 a3=0 items=0 ppid=1 pid=5426 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:21.284400 systemd-logind[1619]: New session 12 of user core. Jan 24 12:02:21.291931 kernel: audit: type=1327 audit(1769256141.253:759): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:21.253000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:21.312179 systemd[1]: Started session-12.scope - Session 12 of User core. 
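The "Accepted publickey ... SHA256:N4Dpt..." lines identify the client key by OpenSSH's SHA256 fingerprint, which is the unpadded base64 of a SHA-256 digest over the raw public-key blob. The sketch below reproduces that format; the key material is a zero-filled placeholder, since the actual key for "core" never appears in this log.

```python
import base64
import hashlib
import struct


def ssh_fingerprint(blob: bytes) -> str:
    """OpenSSH-style SHA256 fingerprint of a raw public-key blob."""
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")


# Placeholder ed25519-style blob: length-prefixed key type plus 32 zero bytes.
# Illustrative only; the real "core" key is not present in this journal.
key_type = b"ssh-ed25519"
fake_key = bytes(32)
blob = struct.pack(">I", len(key_type)) + key_type + struct.pack(">I", len(fake_key)) + fake_key
print(ssh_fingerprint(blob))

# For a real authorized_keys line, the blob is the base64-decoded second field:
#     blob = base64.b64decode(line.split()[1])
```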
Jan 24 12:02:21.345000 audit[5426]: USER_START pid=5426 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:21.392683 kernel: audit: type=1105 audit(1769256141.345:760): pid=5426 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:21.392827 kernel: audit: type=1103 audit(1769256141.359:761): pid=5430 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:21.359000 audit[5430]: CRED_ACQ pid=5430 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:21.627633 sshd[5430]: Connection closed by 10.0.0.1 port 60124 Jan 24 12:02:21.630010 sshd-session[5426]: pam_unix(sshd:session): session closed for user core Jan 24 12:02:21.637000 audit[5426]: USER_END pid=5426 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:21.668640 kernel: audit: type=1106 audit(1769256141.637:762): pid=5426 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:21.668890 kernel: audit: type=1104 audit(1769256141.637:763): pid=5426 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:21.637000 audit[5426]: CRED_DISP pid=5426 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:21.664202 systemd[1]: sshd@10-10.0.0.100:22-10.0.0.1:60124.service: Deactivated successfully. Jan 24 12:02:21.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.100:22-10.0.0.1:60124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:21.676517 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 12:02:21.681442 systemd-logind[1619]: Session 12 logged out. Waiting for processes to exit. Jan 24 12:02:21.690241 systemd-logind[1619]: Removed session 12. 
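All of the 404s in this journal come from manifest resolution: containerd asks ghcr.io for the v3.30.4 tag of each ghcr.io/flatcar/calico/* repository and the registry answers Not Found. The same check can be reproduced out of band with the OCI distribution API; the sketch below assumes ghcr.io's usual anonymous token endpoint and uses the third-party requests package, so treat the exact endpoints and auth flow as assumptions rather than something read from this log.

```python
import requests  # third-party; pip install requests


def tag_exists(repository: str, tag: str) -> bool:
    """Check whether ghcr.io serves a manifest for repository:tag.

    Assumes ghcr.io's anonymous token endpoint and the standard OCI
    distribution API paths; other registries may need different auth.
    """
    token = requests.get(
        "https://ghcr.io/token",
        params={"service": "ghcr.io", "scope": f"repository:{repository}:pull"},
        timeout=10,
    ).json()["token"]
    resp = requests.head(
        f"https://ghcr.io/v2/{repository}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        timeout=10,
    )
    return resp.status_code == 200  # a 404 reproduces the failures in this log


if __name__ == "__main__":
    print(tag_exists("flatcar/calico/apiserver", "v3.30.4"))
```

A 404 at this step, rather than an authentication error, matches the "failed to resolve image ... not found" wording above: the tag simply is not published under that name.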
Jan 24 12:02:21.900726 kubelet[2876]: E0124 12:02:21.900099 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:02:23.901383 kubelet[2876]: E0124 12:02:23.901236 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:02:25.891263 kubelet[2876]: E0124 12:02:25.891102 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:02:25.896015 kubelet[2876]: E0124 12:02:25.895829 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:02:26.663873 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:02:26.664069 kernel: audit: type=1130 audit(1769256146.652:765): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.100:22-10.0.0.1:32918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:26.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.100:22-10.0.0.1:32918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:26.652939 systemd[1]: Started sshd@11-10.0.0.100:22-10.0.0.1:32918.service - OpenSSH per-connection server daemon (10.0.0.1:32918). 
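The dns.go:154 warnings in the records above mean the node's resolv.conf lists more nameservers than the classic resolver limit of three, so kubelet keeps only the first three ("the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") when building pod DNS configuration. A sketch of that truncation, using a hypothetical four-entry resolv.conf since the node's real file is not shown here:

```python
# Mimic the truncation behind the dns.go "Nameserver limits exceeded" warning:
# the classic resolver (and kubelet's pod DNS config) honors at most 3 nameservers.
MAX_NAMESERVERS = 3

resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""  # hypothetical content; the node's real resolv.conf is not in this log

nameservers = [
    line.split()[1]
    for line in resolv_conf.splitlines()
    if line.startswith("nameserver") and len(line.split()) > 1
]
applied, omitted = nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]
print("applied:", " ".join(applied))   # applied: 1.1.1.1 1.0.0.1 8.8.8.8
print("omitted:", " ".join(omitted))   # omitted: 9.9.9.9
```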
Jan 24 12:02:26.860000 audit[5448]: USER_ACCT pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:26.866086 sshd[5448]: Accepted publickey for core from 10.0.0.1 port 32918 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:02:26.869620 sshd-session[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:02:26.866000 audit[5448]: CRED_ACQ pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:26.887452 systemd-logind[1619]: New session 13 of user core. Jan 24 12:02:26.903293 kernel: audit: type=1101 audit(1769256146.860:766): pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:26.903410 kernel: audit: type=1103 audit(1769256146.866:767): pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:26.903467 kernel: audit: type=1006 audit(1769256146.867:768): pid=5448 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 24 12:02:26.867000 audit[5448]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebad81010 a2=3 a3=0 items=0 ppid=1 pid=5448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:26.947465 kernel: audit: type=1300 audit(1769256146.867:768): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebad81010 a2=3 a3=0 items=0 ppid=1 pid=5448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:26.950991 kernel: audit: type=1327 audit(1769256146.867:768): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:26.867000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:26.957374 systemd[1]: Started session-13.scope - Session 13 of User core. 
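Each SSH connection in this journal is socket-activated into its own unit whose instance name encodes the listening and peer endpoints, for example sshd@11-10.0.0.100:22-10.0.0.1:32918.service, and the SERVICE_START/SERVICE_STOP audit events around it bracket exactly one session. A small parser for that instance name; the layout is inferred from the names in this log rather than taken from systemd documentation.

```python
import re

# Parse sshd@<n>-<local>:<port>-<peer>:<port>.service, matching the unit names
# in this journal; the format is inferred from those names.
UNIT_RE = re.compile(
    r"sshd@(?P<n>\d+)-(?P<local>[\d.]+):(?P<lport>\d+)-(?P<peer>[\d.]+):(?P<pport>\d+)\.service"
)

unit = "sshd@11-10.0.0.100:22-10.0.0.1:32918.service"
m = UNIT_RE.fullmatch(unit)
print(m.group("peer"), m.group("pport"))   # 10.0.0.1 32918
```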
Jan 24 12:02:26.983000 audit[5448]: USER_START pid=5448 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:27.033334 kernel: audit: type=1105 audit(1769256146.983:769): pid=5448 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:27.033470 kernel: audit: type=1103 audit(1769256146.991:770): pid=5452 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:26.991000 audit[5452]: CRED_ACQ pid=5452 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:27.334049 sshd[5452]: Connection closed by 10.0.0.1 port 32918 Jan 24 12:02:27.334948 sshd-session[5448]: pam_unix(sshd:session): session closed for user core Jan 24 12:02:27.338000 audit[5448]: USER_END pid=5448 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:27.343937 systemd-logind[1619]: Session 13 logged out. Waiting for processes to exit. Jan 24 12:02:27.350045 systemd[1]: sshd@11-10.0.0.100:22-10.0.0.1:32918.service: Deactivated successfully. Jan 24 12:02:27.357636 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 12:02:27.358809 kernel: audit: type=1106 audit(1769256147.338:771): pid=5448 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:27.339000 audit[5448]: CRED_DISP pid=5448 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:27.366720 systemd-logind[1619]: Removed session 13. Jan 24 12:02:27.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.100:22-10.0.0.1:32918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:02:27.381017 kernel: audit: type=1104 audit(1769256147.339:772): pid=5448 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:27.892487 kubelet[2876]: E0124 12:02:27.891993 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:02:29.890922 kubelet[2876]: E0124 12:02:29.890850 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:02:29.891922 containerd[1652]: time="2026-01-24T12:02:29.891771947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 12:02:30.030032 containerd[1652]: time="2026-01-24T12:02:30.029872334Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:02:30.036297 containerd[1652]: time="2026-01-24T12:02:30.035978076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 12:02:30.036297 containerd[1652]: time="2026-01-24T12:02:30.036147927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 12:02:30.036770 kubelet[2876]: E0124 12:02:30.036678 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 12:02:30.036863 kubelet[2876]: E0124 12:02:30.036772 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 12:02:30.036915 kubelet[2876]: E0124 12:02:30.036886 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-869c797fbb-hltgx_calico-system(ec18082a-49e1-4173-9b94-153e655a0861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 12:02:30.041215 containerd[1652]: time="2026-01-24T12:02:30.041115310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 12:02:30.108160 containerd[1652]: time="2026-01-24T12:02:30.107948660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:02:30.125704 containerd[1652]: time="2026-01-24T12:02:30.115339485Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 12:02:30.125704 containerd[1652]: time="2026-01-24T12:02:30.115458283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 12:02:30.126295 kubelet[2876]: E0124 12:02:30.126184 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 12:02:30.126295 kubelet[2876]: E0124 12:02:30.126266 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 12:02:30.126753 kubelet[2876]: E0124 12:02:30.126381 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-869c797fbb-hltgx_calico-system(ec18082a-49e1-4173-9b94-153e655a0861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 12:02:30.126753 kubelet[2876]: E0124 12:02:30.126446 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-869c797fbb-hltgx" podUID="ec18082a-49e1-4173-9b94-153e655a0861" Jan 24 12:02:32.373825 systemd[1]: Started sshd@12-10.0.0.100:22-10.0.0.1:39760.service - OpenSSH per-connection server daemon (10.0.0.1:39760). Jan 24 12:02:32.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.100:22-10.0.0.1:39760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:32.382361 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:02:32.383593 kernel: audit: type=1130 audit(1769256152.372:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.100:22-10.0.0.1:39760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:02:32.549465 sshd[5468]: Accepted publickey for core from 10.0.0.1 port 39760 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:02:32.547000 audit[5468]: USER_ACCT pid=5468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:32.555185 sshd-session[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:02:32.551000 audit[5468]: CRED_ACQ pid=5468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:32.572468 kernel: audit: type=1101 audit(1769256152.547:775): pid=5468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:32.572621 kernel: audit: type=1103 audit(1769256152.551:776): pid=5468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:32.572694 kernel: audit: type=1006 audit(1769256152.551:777): pid=5468 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 24 12:02:32.572841 systemd-logind[1619]: New session 14 of user core. Jan 24 12:02:32.551000 audit[5468]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff024e7220 a2=3 a3=0 items=0 ppid=1 pid=5468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:32.590802 kernel: audit: type=1300 audit(1769256152.551:777): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff024e7220 a2=3 a3=0 items=0 ppid=1 pid=5468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:32.590913 kernel: audit: type=1327 audit(1769256152.551:777): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:32.551000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:32.594448 systemd[1]: Started session-14.scope - Session 14 of User core. 
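Individual records aside, the recurring pod_workers.go "Error syncing pod, skipping" lines are easiest to digest in aggregate. The sketch below tallies them per pod from a journal dump in this format; the field names come from the lines above, and the kubelet unit name in the usage hint is an assumption about this host.

```python
import re
import sys
from collections import Counter

# Tally kubelet pod_workers.go "Error syncing pod, skipping" records per pod
# from a journald text dump in the format shown above.
POD_RE = re.compile(r'"Error syncing pod, skipping".*?pod="(?P<pod>[^"]+)"', re.DOTALL)


def count_backoffs(text: str) -> Counter:
    return Counter(m.group("pod") for m in POD_RE.finditer(text))


if __name__ == "__main__":
    for pod, n in count_backoffs(sys.stdin.read()).most_common():
        print(f"{n:4d}  {pod}")
```

Piping something like `journalctl -u kubelet.service --no-pager` output into it (unit name assumed) would list the calico-apiserver, csi-node-driver, goldmane, whisker, and kube-controllers pods seen failing throughout this journal.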
Jan 24 12:02:32.602000 audit[5468]: USER_START pid=5468 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:32.610000 audit[5472]: CRED_ACQ pid=5472 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:32.647384 kernel: audit: type=1105 audit(1769256152.602:778): pid=5468 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:32.647652 kernel: audit: type=1103 audit(1769256152.610:779): pid=5472 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:32.810120 sshd[5472]: Connection closed by 10.0.0.1 port 39760 Jan 24 12:02:32.810736 sshd-session[5468]: pam_unix(sshd:session): session closed for user core Jan 24 12:02:32.812000 audit[5468]: USER_END pid=5468 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:32.821832 systemd[1]: sshd@12-10.0.0.100:22-10.0.0.1:39760.service: Deactivated successfully. Jan 24 12:02:32.826276 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 12:02:32.828854 systemd-logind[1619]: Session 14 logged out. Waiting for processes to exit. Jan 24 12:02:32.831617 kernel: audit: type=1106 audit(1769256152.812:780): pid=5468 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:32.831714 kernel: audit: type=1104 audit(1769256152.812:781): pid=5468 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:32.812000 audit[5468]: CRED_DISP pid=5468 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:32.838252 systemd-logind[1619]: Removed session 14. Jan 24 12:02:32.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.100:22-10.0.0.1:39760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:02:33.893240 kubelet[2876]: E0124 12:02:33.893152 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:02:33.894783 kubelet[2876]: E0124 12:02:33.894203 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:02:34.888510 kubelet[2876]: E0124 12:02:34.888382 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:02:34.888510 kubelet[2876]: E0124 12:02:34.888504 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:02:35.891634 kubelet[2876]: E0124 12:02:35.891261 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:02:36.890937 containerd[1652]: time="2026-01-24T12:02:36.890803970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 12:02:36.954038 containerd[1652]: time="2026-01-24T12:02:36.953950872Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:02:36.956495 containerd[1652]: time="2026-01-24T12:02:36.956299255Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 12:02:36.957689 kubelet[2876]: E0124 12:02:36.957050 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:02:36.957689 kubelet[2876]: E0124 12:02:36.957122 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:02:36.957689 kubelet[2876]: E0124 12:02:36.957218 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76997bfb4b-ggwxc_calico-apiserver(543ea964-5bd2-4a2a-be7e-5b64397ea1f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 12:02:36.957689 kubelet[2876]: E0124 12:02:36.957251 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:02:36.962420 containerd[1652]: time="2026-01-24T12:02:36.956385911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 12:02:37.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.100:22-10.0.0.1:39766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:37.839308 systemd[1]: Started sshd@13-10.0.0.100:22-10.0.0.1:39766.service - OpenSSH per-connection server daemon (10.0.0.1:39766). Jan 24 12:02:37.854893 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:02:37.855033 kernel: audit: type=1130 audit(1769256157.838:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.100:22-10.0.0.1:39766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:02:37.943000 audit[5492]: USER_ACCT pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:37.961720 kernel: audit: type=1101 audit(1769256157.943:784): pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:37.949928 sshd-session[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:02:37.962276 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 39766 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:02:37.946000 audit[5492]: CRED_ACQ pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:37.979135 systemd-logind[1619]: New session 15 of user core. Jan 24 12:02:37.992466 kernel: audit: type=1103 audit(1769256157.946:785): pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:37.992644 kernel: audit: type=1006 audit(1769256157.946:786): pid=5492 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 24 12:02:37.946000 audit[5492]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdabf8750 a2=3 a3=0 items=0 ppid=1 pid=5492 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:38.004636 kernel: audit: type=1300 audit(1769256157.946:786): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdabf8750 a2=3 a3=0 items=0 ppid=1 pid=5492 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:38.005008 kernel: audit: type=1327 audit(1769256157.946:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:37.946000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:38.010455 systemd[1]: Started session-15.scope - Session 15 of User core. 
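Note: the containerd entries at 12:02:36 above show the pull of ghcr.io/flatcar/calico/apiserver:v3.30.4 failing with a registry 404 rather than an authentication or network error. Below is a minimal sketch of how one might confirm that outside the kubelet; the anonymous token endpoint and scope string are assumptions about how ghcr.io exposes the OCI distribution API for public repositories, and the repository and tag are taken from the failing PullImage entries.

```python
# Illustrative only: reproduce the "fetch failed after status: 404 Not Found"
# by asking the registry for the manifest directly. The token endpoint below
# is an assumption about ghcr.io, not something shown in this log.
import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPO = "flatcar/calico/apiserver"   # repository named in the PullImage error
TAG = "v3.30.4"                     # tag containerd reports as "not found"

def manifest_exists(repo: str, tag: str) -> bool:
    # Assumed anonymous pull-token endpoint for public ghcr.io repositories.
    token_url = f"https://{REGISTRY}/token?scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            # Accept both Docker v2 manifests and OCI image indexes.
            "Accept": "application/vnd.docker.distribution.manifest.v2+json, "
                      "application/vnd.oci.image.index.v1+json",
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:   # same status containerd logs above
            return False
        raise

if __name__ == "__main__":
    print(f"{REPO}:{TAG} present:", manifest_exists(REPO, TAG))
```

If this also returns a 404, the tag simply does not exist in the registry, which would match the repeated ImagePullBackOff entries for the other calico images in this log.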
Jan 24 12:02:38.016000 audit[5492]: USER_START pid=5492 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.033000 audit[5496]: CRED_ACQ pid=5496 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.051147 kernel: audit: type=1105 audit(1769256158.016:787): pid=5492 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.051258 kernel: audit: type=1103 audit(1769256158.033:788): pid=5496 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.240721 sshd[5496]: Connection closed by 10.0.0.1 port 39766 Jan 24 12:02:38.241998 sshd-session[5492]: pam_unix(sshd:session): session closed for user core Jan 24 12:02:38.246000 audit[5492]: USER_END pid=5492 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.253052 systemd[1]: sshd@13-10.0.0.100:22-10.0.0.1:39766.service: Deactivated successfully. Jan 24 12:02:38.257604 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 12:02:38.259998 systemd-logind[1619]: Session 15 logged out. Waiting for processes to exit. Jan 24 12:02:38.262847 kernel: audit: type=1106 audit(1769256158.246:789): pid=5492 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.263627 systemd[1]: Started sshd@14-10.0.0.100:22-10.0.0.1:39772.service - OpenSSH per-connection server daemon (10.0.0.1:39772). Jan 24 12:02:38.267888 systemd-logind[1619]: Removed session 15. 
Jan 24 12:02:38.247000 audit[5492]: CRED_DISP pid=5492 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.283975 kernel: audit: type=1104 audit(1769256158.247:790): pid=5492 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.100:22-10.0.0.1:39766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:38.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.100:22-10.0.0.1:39772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:38.368000 audit[5510]: USER_ACCT pid=5510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.371007 sshd[5510]: Accepted publickey for core from 10.0.0.1 port 39772 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:02:38.374140 sshd-session[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:02:38.370000 audit[5510]: CRED_ACQ pid=5510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.370000 audit[5510]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc02e95c30 a2=3 a3=0 items=0 ppid=1 pid=5510 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:38.370000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:38.389495 systemd-logind[1619]: New session 16 of user core. Jan 24 12:02:38.403979 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 24 12:02:38.421000 audit[5510]: USER_START pid=5510 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.427000 audit[5514]: CRED_ACQ pid=5514 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.721175 sshd[5514]: Connection closed by 10.0.0.1 port 39772 Jan 24 12:02:38.725907 sshd-session[5510]: pam_unix(sshd:session): session closed for user core Jan 24 12:02:38.729000 audit[5510]: USER_END pid=5510 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.729000 audit[5510]: CRED_DISP pid=5510 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.741455 systemd[1]: sshd@14-10.0.0.100:22-10.0.0.1:39772.service: Deactivated successfully. Jan 24 12:02:38.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.100:22-10.0.0.1:39772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:38.750897 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 12:02:38.752642 systemd-logind[1619]: Session 16 logged out. Waiting for processes to exit. Jan 24 12:02:38.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.100:22-10.0.0.1:39786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:38.764030 systemd[1]: Started sshd@15-10.0.0.100:22-10.0.0.1:39786.service - OpenSSH per-connection server daemon (10.0.0.1:39786). Jan 24 12:02:38.767953 systemd-logind[1619]: Removed session 16. 
Jan 24 12:02:38.864000 audit[5526]: USER_ACCT pid=5526 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.868699 sshd[5526]: Accepted publickey for core from 10.0.0.1 port 39786 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:02:38.867000 audit[5526]: CRED_ACQ pid=5526 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.867000 audit[5526]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec604d8c0 a2=3 a3=0 items=0 ppid=1 pid=5526 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:38.867000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:38.871040 sshd-session[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:02:38.882655 systemd-logind[1619]: New session 17 of user core. Jan 24 12:02:38.895102 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 12:02:38.901000 audit[5526]: USER_START pid=5526 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:38.904000 audit[5530]: CRED_ACQ pid=5530 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:39.052749 sshd[5530]: Connection closed by 10.0.0.1 port 39786 Jan 24 12:02:39.051000 audit[5526]: USER_END pid=5526 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:39.051000 audit[5526]: CRED_DISP pid=5526 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:39.050485 sshd-session[5526]: pam_unix(sshd:session): session closed for user core Jan 24 12:02:39.058914 systemd[1]: sshd@15-10.0.0.100:22-10.0.0.1:39786.service: Deactivated successfully. Jan 24 12:02:39.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.100:22-10.0.0.1:39786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:39.064655 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 12:02:39.066639 systemd-logind[1619]: Session 17 logged out. Waiting for processes to exit. Jan 24 12:02:39.069471 systemd-logind[1619]: Removed session 17. 
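Note: sessions 15 through 17 above open and close within a couple of seconds of each other, and each leaves a matching pair of USER_START/USER_END audit records keyed by the ses= field. A small sketch, assuming the journal line format shown in this excerpt, that pairs those records to report per-session durations:

```python
# Pair USER_START / USER_END audit records by their ses= field.
# Assumes the "Mon DD HH:MM:SS.ffffff audit[pid]: TYPE ..." layout seen above.
import re
from datetime import datetime

RECORD = re.compile(
    r"^(?P<month>\w{3}) (?P<day>\d+) (?P<time>[\d:.]+) audit\[\d+\]: "
    r"(?P<type>USER_START|USER_END) .*?\bses=(?P<ses>\d+)"
)

def session_durations(lines, year=2026):
    starts, durations = {}, {}
    for line in lines:
        m = RECORD.match(line)
        if not m:
            continue
        ts = datetime.strptime(
            f"{year} {m['month']} {m['day']} {m['time']}", "%Y %b %d %H:%M:%S.%f"
        )
        if m["type"] == "USER_START":
            starts[m["ses"]] = ts
        elif m["ses"] in starts:
            durations[m["ses"]] = (ts - starts.pop(m["ses"])).total_seconds()
    return durations

sample = [
    "Jan 24 12:02:38.421000 audit[5510]: USER_START pid=5510 uid=0 auid=500 ses=16",
    "Jan 24 12:02:38.729000 audit[5510]: USER_END pid=5510 uid=0 auid=500 ses=16",
]
print(session_durations(sample))   # {'16': 0.308}
```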
Jan 24 12:02:39.889812 kubelet[2876]: E0124 12:02:39.889698 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:02:40.891112 kubelet[2876]: E0124 12:02:40.891052 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-869c797fbb-hltgx" podUID="ec18082a-49e1-4173-9b94-153e655a0861" Jan 24 12:02:42.888391 kubelet[2876]: E0124 12:02:42.888307 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:02:43.890780 containerd[1652]: time="2026-01-24T12:02:43.890702459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 12:02:43.960431 containerd[1652]: time="2026-01-24T12:02:43.960255407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:02:43.962602 containerd[1652]: time="2026-01-24T12:02:43.962355356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 12:02:43.962602 containerd[1652]: time="2026-01-24T12:02:43.962478352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 12:02:43.964025 kubelet[2876]: E0124 12:02:43.963929 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 12:02:43.964514 kubelet[2876]: E0124 12:02:43.964027 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 12:02:43.964514 kubelet[2876]: E0124 12:02:43.964192 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-k2xcd_calico-system(f944553c-3de6-4dea-af30-6e177d6839ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 12:02:43.964514 kubelet[2876]: E0124 12:02:43.964254 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:02:44.077135 systemd[1]: Started sshd@16-10.0.0.100:22-10.0.0.1:50770.service - OpenSSH per-connection server daemon (10.0.0.1:50770). Jan 24 12:02:44.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.100:22-10.0.0.1:50770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:44.090443 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 24 12:02:44.090629 kernel: audit: type=1130 audit(1769256164.075:810): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.100:22-10.0.0.1:50770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:44.176000 audit[5571]: USER_ACCT pid=5571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:44.182282 sshd-session[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:02:44.187291 sshd[5571]: Accepted publickey for core from 10.0.0.1 port 50770 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:02:44.199402 kernel: audit: type=1101 audit(1769256164.176:811): pid=5571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:44.199639 kernel: audit: type=1103 audit(1769256164.179:812): pid=5571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:44.179000 audit[5571]: CRED_ACQ pid=5571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:44.197666 systemd-logind[1619]: New session 18 of user core. 
Jan 24 12:02:44.179000 audit[5571]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe10b74b20 a2=3 a3=0 items=0 ppid=1 pid=5571 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:44.223085 kernel: audit: type=1006 audit(1769256164.179:813): pid=5571 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 24 12:02:44.223838 kernel: audit: type=1300 audit(1769256164.179:813): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe10b74b20 a2=3 a3=0 items=0 ppid=1 pid=5571 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:44.227314 kernel: audit: type=1327 audit(1769256164.179:813): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:44.179000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:44.224077 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 12:02:44.232000 audit[5571]: USER_START pid=5571 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:44.260928 kernel: audit: type=1105 audit(1769256164.232:814): pid=5571 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:44.261069 kernel: audit: type=1103 audit(1769256164.235:815): pid=5575 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:44.235000 audit[5575]: CRED_ACQ pid=5575 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:44.375640 sshd[5575]: Connection closed by 10.0.0.1 port 50770 Jan 24 12:02:44.376819 sshd-session[5571]: pam_unix(sshd:session): session closed for user core Jan 24 12:02:44.379000 audit[5571]: USER_END pid=5571 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:44.388089 systemd[1]: sshd@16-10.0.0.100:22-10.0.0.1:50770.service: Deactivated successfully. Jan 24 12:02:44.394219 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 24 12:02:44.401307 kernel: audit: type=1106 audit(1769256164.379:816): pid=5571 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:44.379000 audit[5571]: CRED_DISP pid=5571 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:44.405594 systemd-logind[1619]: Session 18 logged out. Waiting for processes to exit. Jan 24 12:02:44.411402 systemd-logind[1619]: Removed session 18. Jan 24 12:02:44.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.100:22-10.0.0.1:50770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:44.418723 kernel: audit: type=1104 audit(1769256164.379:817): pid=5571 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:45.892482 containerd[1652]: time="2026-01-24T12:02:45.891467171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 12:02:45.964283 containerd[1652]: time="2026-01-24T12:02:45.964078511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:02:45.968059 containerd[1652]: time="2026-01-24T12:02:45.967898126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 12:02:45.968059 containerd[1652]: time="2026-01-24T12:02:45.967960023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 12:02:45.968306 kubelet[2876]: E0124 12:02:45.968263 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 12:02:45.969631 kubelet[2876]: E0124 12:02:45.968323 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 12:02:45.969631 kubelet[2876]: E0124 12:02:45.968417 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5d9dddf448-n9r2d_calico-system(dd704f6f-5a9f-42a8-93d9-5d24176bfd82): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 12:02:45.969631 
kubelet[2876]: E0124 12:02:45.968458 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:02:46.889983 containerd[1652]: time="2026-01-24T12:02:46.889914635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 12:02:46.951415 containerd[1652]: time="2026-01-24T12:02:46.951324924Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:02:46.953933 containerd[1652]: time="2026-01-24T12:02:46.953832495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 12:02:46.954391 containerd[1652]: time="2026-01-24T12:02:46.953920215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 12:02:46.954933 kubelet[2876]: E0124 12:02:46.954833 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:02:46.954933 kubelet[2876]: E0124 12:02:46.954929 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 12:02:46.955087 kubelet[2876]: E0124 12:02:46.955011 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76997bfb4b-55f79_calico-apiserver(7e54efbd-6a62-4db3-8b3c-99aa330f72d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 12:02:46.955087 kubelet[2876]: E0124 12:02:46.955046 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:02:48.898765 kubelet[2876]: E0124 12:02:48.896966 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:02:49.403307 systemd[1]: Started sshd@17-10.0.0.100:22-10.0.0.1:50776.service - OpenSSH per-connection server daemon (10.0.0.1:50776). Jan 24 12:02:49.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.100:22-10.0.0.1:50776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:49.406209 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:02:49.406289 kernel: audit: type=1130 audit(1769256169.402:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.100:22-10.0.0.1:50776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:49.520000 audit[5605]: USER_ACCT pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:49.521654 sshd[5605]: Accepted publickey for core from 10.0.0.1 port 50776 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:02:49.527651 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:02:49.549064 kernel: audit: type=1101 audit(1769256169.520:820): pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:49.549193 kernel: audit: type=1103 audit(1769256169.520:821): pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:49.520000 audit[5605]: CRED_ACQ pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:49.540652 systemd-logind[1619]: New session 19 of user core. Jan 24 12:02:49.556656 kernel: audit: type=1006 audit(1769256169.520:822): pid=5605 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 24 12:02:49.520000 audit[5605]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe60f43360 a2=3 a3=0 items=0 ppid=1 pid=5605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:49.556969 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 24 12:02:49.570646 kernel: audit: type=1300 audit(1769256169.520:822): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe60f43360 a2=3 a3=0 items=0 ppid=1 pid=5605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:49.520000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:49.575877 kernel: audit: type=1327 audit(1769256169.520:822): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:49.567000 audit[5605]: USER_START pid=5605 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:49.607820 kernel: audit: type=1105 audit(1769256169.567:823): pid=5605 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:49.607949 kernel: audit: type=1103 audit(1769256169.570:824): pid=5609 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:49.570000 audit[5609]: CRED_ACQ pid=5609 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:49.746827 sshd[5609]: Connection closed by 10.0.0.1 port 50776 Jan 24 12:02:49.747228 sshd-session[5605]: pam_unix(sshd:session): session closed for user core Jan 24 12:02:49.751000 audit[5605]: USER_END pid=5605 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:49.757231 systemd-logind[1619]: Session 19 logged out. Waiting for processes to exit. Jan 24 12:02:49.757361 systemd[1]: sshd@17-10.0.0.100:22-10.0.0.1:50776.service: Deactivated successfully. Jan 24 12:02:49.762244 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 12:02:49.752000 audit[5605]: CRED_DISP pid=5605 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:49.767090 systemd-logind[1619]: Removed session 19. 
Jan 24 12:02:49.774277 kernel: audit: type=1106 audit(1769256169.751:825): pid=5605 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:49.774333 kernel: audit: type=1104 audit(1769256169.752:826): pid=5605 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:49.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.100:22-10.0.0.1:50776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:50.892687 containerd[1652]: time="2026-01-24T12:02:50.890143233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 12:02:50.957528 containerd[1652]: time="2026-01-24T12:02:50.956926173Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:02:50.961120 containerd[1652]: time="2026-01-24T12:02:50.961071570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 12:02:50.961659 containerd[1652]: time="2026-01-24T12:02:50.961184218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 12:02:50.962059 kubelet[2876]: E0124 12:02:50.961936 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 12:02:50.962059 kubelet[2876]: E0124 12:02:50.962004 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 12:02:50.962723 kubelet[2876]: E0124 12:02:50.962134 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-576w8_calico-system(3477849f-ef62-42dc-be46-c8edc5b93ccb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 12:02:50.965615 containerd[1652]: time="2026-01-24T12:02:50.965452182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 12:02:51.041698 containerd[1652]: time="2026-01-24T12:02:51.041469130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 12:02:51.045179 containerd[1652]: time="2026-01-24T12:02:51.045051660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 12:02:51.045470 containerd[1652]: time="2026-01-24T12:02:51.045162406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 12:02:51.046656 kubelet[2876]: E0124 12:02:51.046280 2876 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 12:02:51.046656 kubelet[2876]: E0124 12:02:51.046354 2876 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 12:02:51.046656 kubelet[2876]: E0124 12:02:51.046461 2876 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-576w8_calico-system(3477849f-ef62-42dc-be46-c8edc5b93ccb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 12:02:51.046656 kubelet[2876]: E0124 12:02:51.046518 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:02:54.773744 systemd[1]: Started sshd@18-10.0.0.100:22-10.0.0.1:52782.service - OpenSSH per-connection server daemon (10.0.0.1:52782). Jan 24 12:02:54.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.100:22-10.0.0.1:52782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:02:54.779139 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:02:54.779199 kernel: audit: type=1130 audit(1769256174.773:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.100:22-10.0.0.1:52782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:02:54.887000 audit[5632]: USER_ACCT pid=5632 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:54.888828 sshd[5632]: Accepted publickey for core from 10.0.0.1 port 52782 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:02:54.892893 sshd-session[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:02:54.896232 kubelet[2876]: E0124 12:02:54.895240 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:02:54.910370 kernel: audit: type=1101 audit(1769256174.887:829): pid=5632 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:54.910529 kernel: audit: type=1103 audit(1769256174.890:830): pid=5632 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:54.890000 audit[5632]: CRED_ACQ pid=5632 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:54.924274 systemd-logind[1619]: New session 20 of user core. Jan 24 12:02:54.928206 kernel: audit: type=1006 audit(1769256174.890:831): pid=5632 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 24 12:02:54.890000 audit[5632]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5052a860 a2=3 a3=0 items=0 ppid=1 pid=5632 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:54.950019 kernel: audit: type=1300 audit(1769256174.890:831): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5052a860 a2=3 a3=0 items=0 ppid=1 pid=5632 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:02:54.953988 kernel: audit: type=1327 audit(1769256174.890:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:54.890000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:02:54.959014 systemd[1]: Started session-20.scope - Session 20 of User core. 
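Note: the recurring "Nameserver limits exceeded" warnings (again at 12:02:54.895 just above) mean the node's resolv.conf lists more nameservers than the resolver will use, so kubelet applies only three of them (1.1.1.1, 1.0.0.1, 8.8.8.8 here). A quick check along those lines; the limit of 3 corresponds to glibc's MAXNS, and the /etc/resolv.conf path is an assumption about this host.

```python
# Count nameserver entries in resolv.conf and flag when some would be dropped.
from pathlib import Path

MAX_NAMESERVERS = 3                      # glibc MAXNS; extras are ignored
RESOLV_CONF = Path("/etc/resolv.conf")   # assumed location on the node

def check_nameservers(path: Path = RESOLV_CONF) -> list[str]:
    servers = [
        line.split()[1]
        for line in path.read_text().splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) > 1
    ]
    if len(servers) > MAX_NAMESERVERS:
        kept = servers[:MAX_NAMESERVERS]
        print(f"{len(servers)} nameservers configured; only {kept} will be applied")
    return servers

if __name__ == "__main__":
    check_nameservers()
```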
Jan 24 12:02:54.968000 audit[5632]: USER_START pid=5632 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:54.985616 kernel: audit: type=1105 audit(1769256174.968:832): pid=5632 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:54.973000 audit[5636]: CRED_ACQ pid=5636 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:54.998714 kernel: audit: type=1103 audit(1769256174.973:833): pid=5636 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:55.126413 sshd[5636]: Connection closed by 10.0.0.1 port 52782 Jan 24 12:02:55.127839 sshd-session[5632]: pam_unix(sshd:session): session closed for user core Jan 24 12:02:55.131000 audit[5632]: USER_END pid=5632 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:55.137389 systemd[1]: sshd@18-10.0.0.100:22-10.0.0.1:52782.service: Deactivated successfully. Jan 24 12:02:55.137866 systemd-logind[1619]: Session 20 logged out. Waiting for processes to exit. Jan 24 12:02:55.145066 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 12:02:55.153861 kernel: audit: type=1106 audit(1769256175.131:834): pid=5632 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:55.131000 audit[5632]: CRED_DISP pid=5632 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:55.157766 systemd-logind[1619]: Removed session 20. Jan 24 12:02:55.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.100:22-10.0.0.1:52782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:02:55.172359 kernel: audit: type=1104 audit(1769256175.131:835): pid=5632 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:02:55.894419 kubelet[2876]: E0124 12:02:55.894354 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:02:55.904135 kubelet[2876]: E0124 12:02:55.904051 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-869c797fbb-hltgx" podUID="ec18082a-49e1-4173-9b94-153e655a0861" Jan 24 12:02:57.892227 kubelet[2876]: E0124 12:02:57.892073 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:02:57.893754 kubelet[2876]: E0124 12:02:57.893667 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:02:59.893996 kubelet[2876]: E0124 12:02:59.893855 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:03:00.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.100:22-10.0.0.1:52790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:00.151025 systemd[1]: Started sshd@19-10.0.0.100:22-10.0.0.1:52790.service - OpenSSH per-connection server daemon (10.0.0.1:52790). Jan 24 12:03:00.160630 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:03:00.160695 kernel: audit: type=1130 audit(1769256180.150:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.100:22-10.0.0.1:52790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:00.304000 audit[5650]: USER_ACCT pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:00.313105 sshd[5650]: Accepted publickey for core from 10.0.0.1 port 52790 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:03:00.317049 sshd-session[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:03:00.309000 audit[5650]: CRED_ACQ pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:00.334084 systemd-logind[1619]: New session 21 of user core. 
Jan 24 12:03:00.341654 kernel: audit: type=1101 audit(1769256180.304:838): pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:00.341726 kernel: audit: type=1103 audit(1769256180.309:839): pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:00.349801 kernel: audit: type=1006 audit(1769256180.313:840): pid=5650 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 24 12:03:00.313000 audit[5650]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4487b4b0 a2=3 a3=0 items=0 ppid=1 pid=5650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:00.366626 kernel: audit: type=1300 audit(1769256180.313:840): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4487b4b0 a2=3 a3=0 items=0 ppid=1 pid=5650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:00.366776 kernel: audit: type=1327 audit(1769256180.313:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:00.313000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:00.369126 systemd[1]: Started session-21.scope - Session 21 of User core. 
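Each accepted login above produces the same audit sequence: USER_ACCT and CRED_ACQ while sshd validates the account, a SYSCALL/PROCTITLE pair as auditd records the session-opening process, then USER_START once PAM opens the session. The PROCTITLE field is hex-encoded, NUL-separated argv; the value logged here, 737368642D73657373696F6E3A20636F7265205B707269765D, decodes to "sshd-session: core [priv]". A one-liner for decoding any proctitle value in these records (a minimal sketch; the hex string is copied verbatim from the record above):

    python3 -c 'import sys; print(bytes.fromhex(sys.argv[1]).replace(b"\x00", b" ").decode())' 737368642D73657373696F6E3A20636F7265205B707269765D
    # prints: sshd-session: core [priv]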
Jan 24 12:03:00.379000 audit[5650]: USER_START pid=5650 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:00.396659 kernel: audit: type=1105 audit(1769256180.379:841): pid=5650 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:00.386000 audit[5654]: CRED_ACQ pid=5654 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:00.420609 kernel: audit: type=1103 audit(1769256180.386:842): pid=5654 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:00.572932 sshd[5654]: Connection closed by 10.0.0.1 port 52790 Jan 24 12:03:00.572100 sshd-session[5650]: pam_unix(sshd:session): session closed for user core Jan 24 12:03:00.576000 audit[5650]: USER_END pid=5650 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:00.588701 systemd[1]: sshd@19-10.0.0.100:22-10.0.0.1:52790.service: Deactivated successfully. Jan 24 12:03:00.593174 kernel: audit: type=1106 audit(1769256180.576:843): pid=5650 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:00.576000 audit[5650]: CRED_DISP pid=5650 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:00.593720 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 12:03:00.595860 systemd-logind[1619]: Session 21 logged out. Waiting for processes to exit. Jan 24 12:03:00.599341 systemd-logind[1619]: Removed session 21. Jan 24 12:03:00.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.100:22-10.0.0.1:52790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:03:00.606650 kernel: audit: type=1104 audit(1769256180.576:844): pid=5650 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:03.905287 kubelet[2876]: E0124 12:03:03.905119 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:03:05.601218 systemd[1]: Started sshd@20-10.0.0.100:22-10.0.0.1:53588.service - OpenSSH per-connection server daemon (10.0.0.1:53588). Jan 24 12:03:05.605665 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:03:05.605761 kernel: audit: type=1130 audit(1769256185.600:846): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.100:22-10.0.0.1:53588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:05.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.100:22-10.0.0.1:53588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:03:05.699000 audit[5670]: USER_ACCT pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:05.704964 sshd-session[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:03:05.707285 sshd[5670]: Accepted publickey for core from 10.0.0.1 port 53588 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:03:05.702000 audit[5670]: CRED_ACQ pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:05.726823 kernel: audit: type=1101 audit(1769256185.699:847): pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:05.726956 kernel: audit: type=1103 audit(1769256185.702:848): pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:05.734162 kernel: audit: type=1006 audit(1769256185.702:849): pid=5670 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 24 12:03:05.733831 systemd-logind[1619]: New session 22 of user core. Jan 24 12:03:05.702000 audit[5670]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1ca91580 a2=3 a3=0 items=0 ppid=1 pid=5670 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:05.702000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:05.761230 kernel: audit: type=1300 audit(1769256185.702:849): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1ca91580 a2=3 a3=0 items=0 ppid=1 pid=5670 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:05.761427 kernel: audit: type=1327 audit(1769256185.702:849): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:05.762302 systemd[1]: Started session-22.scope - Session 22 of User core. 
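The same PAM lifecycle repeats for session 22: USER_ACCT → CRED_ACQ → USER_START on login, then USER_END → CRED_DISP and a systemd SERVICE_STOP once the connection closes. To pull a single session's records out of this interleaved stream, the audit userspace tools are the usual route; a sketch, assuming ausearch is available on the host (Flatcar does not necessarily ship it in the base image):

    ausearch --session 22 -i                            # everything tied to audit session 22, interpreted
    ausearch -m USER_START,USER_END --start today -i    # or just the login/logout boundaries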
Jan 24 12:03:05.772000 audit[5670]: USER_START pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:05.773000 audit[5674]: CRED_ACQ pid=5674 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:05.798643 kernel: audit: type=1105 audit(1769256185.772:850): pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:05.798704 kernel: audit: type=1103 audit(1769256185.773:851): pid=5674 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:05.999979 sshd[5674]: Connection closed by 10.0.0.1 port 53588 Jan 24 12:03:06.002848 sshd-session[5670]: pam_unix(sshd:session): session closed for user core Jan 24 12:03:06.006000 audit[5670]: USER_END pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:06.014494 systemd[1]: sshd@20-10.0.0.100:22-10.0.0.1:53588.service: Deactivated successfully. Jan 24 12:03:06.021933 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 12:03:06.006000 audit[5670]: CRED_DISP pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:06.029682 systemd-logind[1619]: Session 22 logged out. Waiting for processes to exit. Jan 24 12:03:06.033413 systemd-logind[1619]: Removed session 22. Jan 24 12:03:06.044501 kernel: audit: type=1106 audit(1769256186.006:852): pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:06.044688 kernel: audit: type=1104 audit(1769256186.006:853): pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:06.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.100:22-10.0.0.1:53588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:03:07.888766 kubelet[2876]: E0124 12:03:07.888420 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:03:07.933022 kubelet[2876]: E0124 12:03:07.893415 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:03:07.933022 kubelet[2876]: E0124 12:03:07.899013 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-869c797fbb-hltgx" podUID="ec18082a-49e1-4173-9b94-153e655a0861" Jan 24 12:03:08.898309 kubelet[2876]: E0124 12:03:08.898257 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:03:11.026231 systemd[1]: Started sshd@21-10.0.0.100:22-10.0.0.1:53604.service - OpenSSH per-connection server daemon (10.0.0.1:53604). Jan 24 12:03:11.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.100:22-10.0.0.1:53604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:11.038179 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:03:11.038274 kernel: audit: type=1130 audit(1769256191.027:855): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.100:22-10.0.0.1:53604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:03:11.181000 audit[5715]: USER_ACCT pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.188892 sshd[5715]: Accepted publickey for core from 10.0.0.1 port 53604 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:03:11.189647 sshd-session[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:03:11.209865 kernel: audit: type=1101 audit(1769256191.181:856): pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.185000 audit[5715]: CRED_ACQ pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.229860 systemd-logind[1619]: New session 23 of user core. Jan 24 12:03:11.232835 kernel: audit: type=1103 audit(1769256191.185:857): pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.185000 audit[5715]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9d98b1d0 a2=3 a3=0 items=0 ppid=1 pid=5715 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:11.271492 kernel: audit: type=1006 audit(1769256191.185:858): pid=5715 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 24 12:03:11.271644 kernel: audit: type=1300 audit(1769256191.185:858): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9d98b1d0 a2=3 a3=0 items=0 ppid=1 pid=5715 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:11.271694 kernel: audit: type=1327 audit(1769256191.185:858): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:11.185000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:11.286040 systemd[1]: Started session-23.scope - Session 23 of User core. 
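The kubelet "Nameserver limits exceeded" line above is a side effect of pod DNS handling: kubelet copies at most three nameserver entries into a pod's resolv.conf, so any extra servers in the node's /etc/resolv.conf are dropped and the warning lists the three that survived (1.1.1.1, 1.0.0.1, 8.8.8.8). Trimming the node's resolver configuration to three entries would silence it; a hypothetical node /etc/resolv.conf matching what kubelet applied:

    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8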
Jan 24 12:03:11.301000 audit[5715]: USER_START pid=5715 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.324000 audit[5725]: CRED_ACQ pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.353128 kernel: audit: type=1105 audit(1769256191.301:859): pid=5715 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.353260 kernel: audit: type=1103 audit(1769256191.324:860): pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.602102 sshd[5725]: Connection closed by 10.0.0.1 port 53604 Jan 24 12:03:11.603365 sshd-session[5715]: pam_unix(sshd:session): session closed for user core Jan 24 12:03:11.605000 audit[5715]: USER_END pid=5715 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.625635 kernel: audit: type=1106 audit(1769256191.605:861): pid=5715 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.606000 audit[5715]: CRED_DISP pid=5715 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.636158 systemd[1]: sshd@21-10.0.0.100:22-10.0.0.1:53604.service: Deactivated successfully. Jan 24 12:03:11.643516 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 12:03:11.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.100:22-10.0.0.1:53604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:11.644597 kernel: audit: type=1104 audit(1769256191.606:862): pid=5715 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.649856 systemd-logind[1619]: Session 23 logged out. Waiting for processes to exit. Jan 24 12:03:11.653871 systemd[1]: Started sshd@22-10.0.0.100:22-10.0.0.1:53610.service - OpenSSH per-connection server daemon (10.0.0.1:53610). 
Jan 24 12:03:11.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.100:22-10.0.0.1:53610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:11.656443 systemd-logind[1619]: Removed session 23. Jan 24 12:03:11.777000 audit[5740]: USER_ACCT pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.778602 sshd[5740]: Accepted publickey for core from 10.0.0.1 port 53610 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:03:11.780000 audit[5740]: CRED_ACQ pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.780000 audit[5740]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff23e546d0 a2=3 a3=0 items=0 ppid=1 pid=5740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:11.780000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:11.782363 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:03:11.800467 systemd-logind[1619]: New session 24 of user core. Jan 24 12:03:11.807265 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 24 12:03:11.820000 audit[5740]: USER_START pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.826000 audit[5745]: CRED_ACQ pid=5745 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:11.897509 kubelet[2876]: E0124 12:03:11.897185 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:03:12.632614 sshd[5745]: Connection closed by 10.0.0.1 port 53610 Jan 24 12:03:12.633495 sshd-session[5740]: pam_unix(sshd:session): session closed for user core Jan 24 12:03:12.647000 audit[5740]: USER_END pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:12.647000 
audit[5740]: CRED_DISP pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:12.663446 systemd[1]: sshd@22-10.0.0.100:22-10.0.0.1:53610.service: Deactivated successfully. Jan 24 12:03:12.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.100:22-10.0.0.1:53610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:12.671410 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 12:03:12.675700 systemd-logind[1619]: Session 24 logged out. Waiting for processes to exit. Jan 24 12:03:12.682600 systemd[1]: Started sshd@23-10.0.0.100:22-10.0.0.1:33658.service - OpenSSH per-connection server daemon (10.0.0.1:33658). Jan 24 12:03:12.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.100:22-10.0.0.1:33658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:12.688295 systemd-logind[1619]: Removed session 24. Jan 24 12:03:12.889173 kubelet[2876]: E0124 12:03:12.889008 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:03:12.928000 audit[5756]: USER_ACCT pid=5756 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:12.931762 sshd[5756]: Accepted publickey for core from 10.0.0.1 port 33658 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:03:12.940000 audit[5756]: CRED_ACQ pid=5756 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:12.940000 audit[5756]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2caa0b70 a2=3 a3=0 items=0 ppid=1 pid=5756 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:12.940000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:12.954733 sshd-session[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:03:12.970605 systemd-logind[1619]: New session 25 of user core. Jan 24 12:03:12.985950 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 24 12:03:12.992000 audit[5756]: USER_START pid=5756 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:12.995000 audit[5760]: CRED_ACQ pid=5760 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:14.291000 audit[5772]: NETFILTER_CFG table=filter:135 family=2 entries=26 op=nft_register_rule pid=5772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:03:14.291000 audit[5772]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffd741a650 a2=0 a3=7fffd741a63c items=0 ppid=3041 pid=5772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:14.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:03:14.312000 audit[5772]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:03:14.312000 audit[5772]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffd741a650 a2=0 a3=0 items=0 ppid=3041 pid=5772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:14.312000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:03:14.336684 sshd[5760]: Connection closed by 10.0.0.1 port 33658 Jan 24 12:03:14.334849 sshd-session[5756]: pam_unix(sshd:session): session closed for user core Jan 24 12:03:14.336000 audit[5756]: USER_END pid=5756 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:14.336000 audit[5756]: CRED_DISP pid=5756 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:14.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.100:22-10.0.0.1:33658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:14.356678 systemd[1]: sshd@23-10.0.0.100:22-10.0.0.1:33658.service: Deactivated successfully. Jan 24 12:03:14.361237 systemd[1]: session-25.scope: Deactivated successfully. Jan 24 12:03:14.367881 systemd-logind[1619]: Session 25 logged out. Waiting for processes to exit. Jan 24 12:03:14.373188 systemd[1]: Started sshd@24-10.0.0.100:22-10.0.0.1:33670.service - OpenSSH per-connection server daemon (10.0.0.1:33670). 
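The NETFILTER_CFG/SYSCALL pairs above (pid 5772, ppid 3041) record nft rule updates made through the iptables compatibility layer (exe=/usr/sbin/xtables-nft-multi). Their PROCTITLE hex, 69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273, decodes to "iptables-restore -w 5 --noflush --counters", i.e. an atomic, non-flushing restore that preserves packet counters; on a node like this the caller is most likely kube-proxy's periodic sync, though the records themselves only identify the parent pid. To watch these reloads in isolation (a sketch, again assuming the audit userspace tools are installed):

    ausearch -m NETFILTER_CFG --start recent -i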
Jan 24 12:03:14.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.100:22-10.0.0.1:33670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:14.375621 systemd-logind[1619]: Removed session 25. Jan 24 12:03:14.522000 audit[5777]: USER_ACCT pid=5777 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:14.523775 sshd[5777]: Accepted publickey for core from 10.0.0.1 port 33670 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:03:14.525000 audit[5777]: CRED_ACQ pid=5777 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:14.525000 audit[5777]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea725a810 a2=3 a3=0 items=0 ppid=1 pid=5777 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:14.525000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:14.528244 sshd-session[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:03:14.543136 systemd-logind[1619]: New session 26 of user core. Jan 24 12:03:14.557725 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 24 12:03:14.567000 audit[5777]: USER_START pid=5777 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:14.571000 audit[5781]: CRED_ACQ pid=5781 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:15.177286 sshd[5781]: Connection closed by 10.0.0.1 port 33670 Jan 24 12:03:15.177841 sshd-session[5777]: pam_unix(sshd:session): session closed for user core Jan 24 12:03:15.184000 audit[5777]: USER_END pid=5777 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:15.184000 audit[5777]: CRED_DISP pid=5777 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:15.207878 systemd[1]: sshd@24-10.0.0.100:22-10.0.0.1:33670.service: Deactivated successfully. Jan 24 12:03:15.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.100:22-10.0.0.1:33670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 24 12:03:15.215391 systemd[1]: session-26.scope: Deactivated successfully. Jan 24 12:03:15.217787 systemd-logind[1619]: Session 26 logged out. Waiting for processes to exit. Jan 24 12:03:15.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.100:22-10.0.0.1:33686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:15.230029 systemd[1]: Started sshd@25-10.0.0.100:22-10.0.0.1:33686.service - OpenSSH per-connection server daemon (10.0.0.1:33686). Jan 24 12:03:15.232279 systemd-logind[1619]: Removed session 26. Jan 24 12:03:15.336000 audit[5792]: USER_ACCT pid=5792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:15.338000 audit[5792]: CRED_ACQ pid=5792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:15.339000 audit[5792]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff76806060 a2=3 a3=0 items=0 ppid=1 pid=5792 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:15.339000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:15.340061 sshd[5792]: Accepted publickey for core from 10.0.0.1 port 33686 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:03:15.341775 sshd-session[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:03:15.361133 systemd-logind[1619]: New session 27 of user core. Jan 24 12:03:15.366000 audit[5797]: NETFILTER_CFG table=filter:137 family=2 entries=38 op=nft_register_rule pid=5797 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:03:15.366000 audit[5797]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe5adbeae0 a2=0 a3=7ffe5adbeacc items=0 ppid=3041 pid=5797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:15.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:03:15.368427 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 24 12:03:15.376000 audit[5797]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5797 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:03:15.377000 audit[5792]: USER_START pid=5792 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:15.376000 audit[5797]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe5adbeae0 a2=0 a3=0 items=0 ppid=3041 pid=5797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:15.376000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:03:15.381000 audit[5798]: CRED_ACQ pid=5798 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:15.667815 sshd[5798]: Connection closed by 10.0.0.1 port 33686 Jan 24 12:03:15.670832 sshd-session[5792]: pam_unix(sshd:session): session closed for user core Jan 24 12:03:15.676000 audit[5792]: USER_END pid=5792 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:15.677000 audit[5792]: CRED_DISP pid=5792 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:15.682467 systemd-logind[1619]: Session 27 logged out. Waiting for processes to exit. Jan 24 12:03:15.684882 systemd[1]: sshd@25-10.0.0.100:22-10.0.0.1:33686.service: Deactivated successfully. Jan 24 12:03:15.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.100:22-10.0.0.1:33686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:15.692001 systemd[1]: session-27.scope: Deactivated successfully. Jan 24 12:03:15.702459 systemd-logind[1619]: Removed session 27. 
Jan 24 12:03:15.899463 kubelet[2876]: E0124 12:03:15.899338 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:03:18.893196 kubelet[2876]: E0124 12:03:18.893101 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-869c797fbb-hltgx" podUID="ec18082a-49e1-4173-9b94-153e655a0861" Jan 24 12:03:20.706398 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 24 12:03:20.706535 kernel: audit: type=1130 audit(1769256200.701:904): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.100:22-10.0.0.1:33692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:20.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.100:22-10.0.0.1:33692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:20.702061 systemd[1]: Started sshd@26-10.0.0.100:22-10.0.0.1:33692.service - OpenSSH per-connection server daemon (10.0.0.1:33692). 
Jan 24 12:03:20.894691 kubelet[2876]: E0124 12:03:20.894080 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:03:20.896000 audit[5812]: USER_ACCT pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:20.898618 sshd[5812]: Accepted publickey for core from 10.0.0.1 port 33692 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:03:20.906992 sshd-session[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:03:20.901000 audit[5812]: CRED_ACQ pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:20.920774 kernel: audit: type=1101 audit(1769256200.896:905): pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:20.920832 kernel: audit: type=1103 audit(1769256200.901:906): pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:20.934668 systemd-logind[1619]: New session 28 of user core. Jan 24 12:03:20.942442 kernel: audit: type=1006 audit(1769256200.901:907): pid=5812 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 24 12:03:20.942528 kernel: audit: type=1300 audit(1769256200.901:907): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5bdd3720 a2=3 a3=0 items=0 ppid=1 pid=5812 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:20.901000 audit[5812]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5bdd3720 a2=3 a3=0 items=0 ppid=1 pid=5812 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:20.960699 kernel: audit: type=1327 audit(1769256200.901:907): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:20.901000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:20.975487 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 24 12:03:21.016517 kernel: audit: type=1105 audit(1769256200.986:908): pid=5812 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:20.986000 audit[5812]: USER_START pid=5812 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:20.992000 audit[5816]: CRED_ACQ pid=5816 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:21.042117 kernel: audit: type=1103 audit(1769256200.992:909): pid=5816 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:21.240911 sshd[5816]: Connection closed by 10.0.0.1 port 33692 Jan 24 12:03:21.241504 sshd-session[5812]: pam_unix(sshd:session): session closed for user core Jan 24 12:03:21.246000 audit[5812]: USER_END pid=5812 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:21.256162 systemd[1]: sshd@26-10.0.0.100:22-10.0.0.1:33692.service: Deactivated successfully. Jan 24 12:03:21.260704 systemd[1]: session-28.scope: Deactivated successfully. Jan 24 12:03:21.267861 systemd-logind[1619]: Session 28 logged out. Waiting for processes to exit. Jan 24 12:03:21.247000 audit[5812]: CRED_DISP pid=5812 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:21.274933 systemd-logind[1619]: Removed session 28. Jan 24 12:03:21.291443 kernel: audit: type=1106 audit(1769256201.246:910): pid=5812 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:21.291632 kernel: audit: type=1104 audit(1769256201.247:911): pid=5812 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:21.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.100:22-10.0.0.1:33692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:03:21.893012 kubelet[2876]: E0124 12:03:21.892958 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:03:24.891192 kubelet[2876]: E0124 12:03:24.889828 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:03:24.899094 kubelet[2876]: E0124 12:03:24.898437 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:03:26.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.100:22-10.0.0.1:52654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:26.280972 systemd[1]: Started sshd@27-10.0.0.100:22-10.0.0.1:52654.service - OpenSSH per-connection server daemon (10.0.0.1:52654). Jan 24 12:03:26.283699 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:03:26.284036 kernel: audit: type=1130 audit(1769256206.280:913): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.100:22-10.0.0.1:52654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:03:26.456000 audit[5830]: USER_ACCT pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:26.458732 sshd[5830]: Accepted publickey for core from 10.0.0.1 port 52654 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:03:26.491736 kernel: audit: type=1101 audit(1769256206.456:914): pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:26.491856 kernel: audit: type=1103 audit(1769256206.468:915): pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:26.468000 audit[5830]: CRED_ACQ pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:26.479415 sshd-session[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:03:26.510245 kernel: audit: type=1006 audit(1769256206.468:916): pid=5830 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 24 12:03:26.510513 kernel: audit: type=1300 audit(1769256206.468:916): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb041f0c0 a2=3 a3=0 items=0 ppid=1 pid=5830 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:26.468000 audit[5830]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb041f0c0 a2=3 a3=0 items=0 ppid=1 pid=5830 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:26.507411 systemd-logind[1619]: New session 29 of user core. Jan 24 12:03:26.468000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:26.538903 kernel: audit: type=1327 audit(1769256206.468:916): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:26.543992 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 24 12:03:26.557000 audit[5830]: USER_START pid=5830 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:26.585104 kernel: audit: type=1105 audit(1769256206.557:917): pid=5830 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:26.585244 kernel: audit: type=1103 audit(1769256206.571:918): pid=5834 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:26.571000 audit[5834]: CRED_ACQ pid=5834 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:26.774294 sshd[5834]: Connection closed by 10.0.0.1 port 52654 Jan 24 12:03:26.774869 sshd-session[5830]: pam_unix(sshd:session): session closed for user core Jan 24 12:03:26.776000 audit[5830]: USER_END pid=5830 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:26.785810 systemd[1]: sshd@27-10.0.0.100:22-10.0.0.1:52654.service: Deactivated successfully. Jan 24 12:03:26.790015 systemd[1]: session-29.scope: Deactivated successfully. Jan 24 12:03:26.792862 kernel: audit: type=1106 audit(1769256206.776:919): pid=5830 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:26.793460 kernel: audit: type=1104 audit(1769256206.776:920): pid=5830 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:26.776000 audit[5830]: CRED_DISP pid=5830 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:26.792782 systemd-logind[1619]: Session 29 logged out. Waiting for processes to exit. Jan 24 12:03:26.797014 systemd-logind[1619]: Removed session 29. Jan 24 12:03:26.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.100:22-10.0.0.1:52654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:03:26.898468 kubelet[2876]: E0124 12:03:26.898108 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:03:29.896688 kubelet[2876]: E0124 12:03:29.895410 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-869c797fbb-hltgx" podUID="ec18082a-49e1-4173-9b94-153e655a0861" Jan 24 12:03:31.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.100:22-10.0.0.1:52658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:31.816192 systemd[1]: Started sshd@28-10.0.0.100:22-10.0.0.1:52658.service - OpenSSH per-connection server daemon (10.0.0.1:52658). Jan 24 12:03:31.852973 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:03:31.853091 kernel: audit: type=1130 audit(1769256211.817:922): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.100:22-10.0.0.1:52658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:03:32.017000 audit[5850]: USER_ACCT pid=5850 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:32.043207 kernel: audit: type=1101 audit(1769256212.017:923): pid=5850 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:32.066599 sshd[5850]: Accepted publickey for core from 10.0.0.1 port 52658 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:03:32.103000 audit[5850]: CRED_ACQ pid=5850 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:32.134887 kernel: audit: type=1103 audit(1769256212.103:924): pid=5850 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:32.141121 sshd-session[5850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:03:32.112000 audit[5850]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff81b08400 a2=3 a3=0 items=0 ppid=1 pid=5850 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:32.169645 systemd-logind[1619]: New session 30 of user core. Jan 24 12:03:32.194885 kernel: audit: type=1006 audit(1769256212.112:925): pid=5850 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 24 12:03:32.202425 kernel: audit: type=1300 audit(1769256212.112:925): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff81b08400 a2=3 a3=0 items=0 ppid=1 pid=5850 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:32.203020 kernel: audit: type=1327 audit(1769256212.112:925): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:32.112000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:32.204866 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 24 12:03:32.222000 audit[5850]: USER_START pid=5850 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:32.250473 kernel: audit: type=1105 audit(1769256212.222:926): pid=5850 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:32.250652 kernel: audit: type=1103 audit(1769256212.246:927): pid=5854 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:32.246000 audit[5854]: CRED_ACQ pid=5854 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:32.468109 sshd[5854]: Connection closed by 10.0.0.1 port 52658 Jan 24 12:03:32.474775 sshd-session[5850]: pam_unix(sshd:session): session closed for user core Jan 24 12:03:32.476000 audit[5850]: USER_END pid=5850 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:32.491947 systemd[1]: sshd@28-10.0.0.100:22-10.0.0.1:52658.service: Deactivated successfully. Jan 24 12:03:32.500972 systemd-logind[1619]: Session 30 logged out. Waiting for processes to exit. Jan 24 12:03:32.506879 kernel: audit: type=1106 audit(1769256212.476:928): pid=5850 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:32.517116 systemd[1]: session-30.scope: Deactivated successfully. Jan 24 12:03:32.477000 audit[5850]: CRED_DISP pid=5850 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:32.529001 systemd-logind[1619]: Removed session 30. Jan 24 12:03:32.546604 kernel: audit: type=1104 audit(1769256212.477:929): pid=5850 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:32.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.100:22-10.0.0.1:52658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:03:32.895037 kubelet[2876]: E0124 12:03:32.894945 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:03:32.895783 kubelet[2876]: E0124 12:03:32.895051 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:03:36.889610 kubelet[2876]: E0124 12:03:36.889314 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-55f79" podUID="7e54efbd-6a62-4db3-8b3c-99aa330f72d1" Jan 24 12:03:37.489026 systemd[1]: Started sshd@29-10.0.0.100:22-10.0.0.1:59566.service - OpenSSH per-connection server daemon (10.0.0.1:59566). Jan 24 12:03:37.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.100:22-10.0.0.1:59566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:37.495651 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:03:37.496197 kernel: audit: type=1130 audit(1769256217.488:931): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.100:22-10.0.0.1:59566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:37.595000 audit[5867]: USER_ACCT pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:37.599996 sshd-session[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:03:37.601739 sshd[5867]: Accepted publickey for core from 10.0.0.1 port 59566 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:03:37.611338 systemd-logind[1619]: New session 31 of user core. 
Jan 24 12:03:37.625618 kernel: audit: type=1101 audit(1769256217.595:932): pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:37.625763 kernel: audit: type=1103 audit(1769256217.597:933): pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:37.597000 audit[5867]: CRED_ACQ pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:37.634181 kernel: audit: type=1006 audit(1769256217.598:934): pid=5867 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 24 12:03:37.598000 audit[5867]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7cf2ceb0 a2=3 a3=0 items=0 ppid=1 pid=5867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:37.649853 kernel: audit: type=1300 audit(1769256217.598:934): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7cf2ceb0 a2=3 a3=0 items=0 ppid=1 pid=5867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:37.649951 kernel: audit: type=1327 audit(1769256217.598:934): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:37.598000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:37.659039 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 24 12:03:37.667000 audit[5867]: USER_START pid=5867 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:37.670000 audit[5871]: CRED_ACQ pid=5871 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:37.704696 kernel: audit: type=1105 audit(1769256217.667:935): pid=5867 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:37.704873 kernel: audit: type=1103 audit(1769256217.670:936): pid=5871 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:37.833967 sshd[5871]: Connection closed by 10.0.0.1 port 59566 Jan 24 12:03:37.836836 sshd-session[5867]: pam_unix(sshd:session): session closed for user core Jan 24 12:03:37.841000 audit[5867]: USER_END pid=5867 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:37.841000 audit[5867]: CRED_DISP pid=5867 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:37.864453 systemd[1]: sshd@29-10.0.0.100:22-10.0.0.1:59566.service: Deactivated successfully. Jan 24 12:03:37.869098 systemd[1]: session-31.scope: Deactivated successfully. Jan 24 12:03:37.873651 kernel: audit: type=1106 audit(1769256217.841:937): pid=5867 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:37.874255 kernel: audit: type=1104 audit(1769256217.841:938): pid=5867 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:37.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.100:22-10.0.0.1:59566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:37.874610 systemd-logind[1619]: Session 31 logged out. Waiting for processes to exit. Jan 24 12:03:37.878918 systemd-logind[1619]: Removed session 31. 
Jan 24 12:03:37.890346 kubelet[2876]: E0124 12:03:37.890292 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76997bfb4b-ggwxc" podUID="543ea964-5bd2-4a2a-be7e-5b64397ea1f6" Jan 24 12:03:37.896063 kubelet[2876]: E0124 12:03:37.895748 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:03:38.485000 audit[5885]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=5885 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:03:38.485000 audit[5885]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdc6b99b30 a2=0 a3=7ffdc6b99b1c items=0 ppid=3041 pid=5885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:38.485000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:03:38.500000 audit[5885]: NETFILTER_CFG table=nat:140 family=2 entries=104 op=nft_register_chain pid=5885 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 12:03:38.500000 audit[5885]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdc6b99b30 a2=0 a3=7ffdc6b99b1c items=0 ppid=3041 pid=5885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:38.500000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 12:03:40.890687 kubelet[2876]: E0124 12:03:40.890090 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:03:40.896973 kubelet[2876]: E0124 12:03:40.896628 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-869c797fbb-hltgx" podUID="ec18082a-49e1-4173-9b94-153e655a0861" Jan 24 12:03:42.879122 systemd[1]: Started sshd@30-10.0.0.100:22-10.0.0.1:51330.service - OpenSSH per-connection server daemon (10.0.0.1:51330). Jan 24 12:03:42.899207 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 24 12:03:42.899323 kernel: audit: type=1130 audit(1769256222.879:942): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.100:22-10.0.0.1:51330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:42.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.100:22-10.0.0.1:51330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:43.054000 audit[5913]: USER_ACCT pid=5913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:43.056784 sshd[5913]: Accepted publickey for core from 10.0.0.1 port 51330 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:03:43.059477 sshd-session[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:03:43.075912 systemd-logind[1619]: New session 32 of user core. 
Jan 24 12:03:43.056000 audit[5913]: CRED_ACQ pid=5913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:43.126819 kernel: audit: type=1101 audit(1769256223.054:943): pid=5913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:43.127225 kernel: audit: type=1103 audit(1769256223.056:944): pid=5913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:43.127283 kernel: audit: type=1006 audit(1769256223.057:945): pid=5913 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 24 12:03:43.057000 audit[5913]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff026f1f30 a2=3 a3=0 items=0 ppid=1 pid=5913 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:43.163632 kernel: audit: type=1300 audit(1769256223.057:945): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff026f1f30 a2=3 a3=0 items=0 ppid=1 pid=5913 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:43.165478 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 24 12:03:43.057000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:43.176847 kernel: audit: type=1327 audit(1769256223.057:945): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:43.192838 kernel: audit: type=1105 audit(1769256223.176:946): pid=5913 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:43.176000 audit[5913]: USER_START pid=5913 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:43.195000 audit[5917]: CRED_ACQ pid=5917 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:43.231636 kernel: audit: type=1103 audit(1769256223.195:947): pid=5917 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:43.540898 sshd[5917]: Connection closed by 10.0.0.1 port 51330 Jan 24 12:03:43.533187 sshd-session[5913]: pam_unix(sshd:session): session closed for user core Jan 24 12:03:43.539500 systemd[1]: sshd@30-10.0.0.100:22-10.0.0.1:51330.service: Deactivated successfully. Jan 24 12:03:43.534000 audit[5913]: USER_END pid=5913 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:43.544970 systemd[1]: session-32.scope: Deactivated successfully. Jan 24 12:03:43.549212 systemd-logind[1619]: Session 32 logged out. Waiting for processes to exit. Jan 24 12:03:43.552712 systemd-logind[1619]: Removed session 32. 
Jan 24 12:03:43.535000 audit[5913]: CRED_DISP pid=5913 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:43.582397 kernel: audit: type=1106 audit(1769256223.534:948): pid=5913 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:43.583302 kernel: audit: type=1104 audit(1769256223.535:949): pid=5913 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:43.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.100:22-10.0.0.1:51330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:43.893170 kubelet[2876]: E0124 12:03:43.893111 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:03:46.889255 kubelet[2876]: E0124 12:03:46.888773 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 12:03:47.889702 kubelet[2876]: E0124 12:03:47.889650 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d9dddf448-n9r2d" podUID="dd704f6f-5a9f-42a8-93d9-5d24176bfd82" Jan 24 12:03:47.893679 kubelet[2876]: E0124 12:03:47.893532 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k2xcd" podUID="f944553c-3de6-4dea-af30-6e177d6839ad" Jan 24 12:03:48.548252 systemd[1]: Started sshd@31-10.0.0.100:22-10.0.0.1:51348.service - OpenSSH per-connection server daemon (10.0.0.1:51348). Jan 24 12:03:48.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.100:22-10.0.0.1:51348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 12:03:48.561618 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 12:03:48.561751 kernel: audit: type=1130 audit(1769256228.547:951): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.100:22-10.0.0.1:51348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:48.653000 audit[5931]: USER_ACCT pid=5931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:48.655002 sshd[5931]: Accepted publickey for core from 10.0.0.1 port 51348 ssh2: RSA SHA256:N4DptLu65muvg2RdNP5t6A9jwGknXmCATYE4jszWH64 Jan 24 12:03:48.660539 sshd-session[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 12:03:48.655000 audit[5931]: CRED_ACQ pid=5931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:48.678758 systemd-logind[1619]: New session 33 of user core. Jan 24 12:03:48.698802 kernel: audit: type=1101 audit(1769256228.653:952): pid=5931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:48.698961 kernel: audit: type=1103 audit(1769256228.655:953): pid=5931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:48.706614 kernel: audit: type=1006 audit(1769256228.655:954): pid=5931 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 24 12:03:48.706718 kernel: audit: type=1300 audit(1769256228.655:954): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5db07b50 a2=3 a3=0 items=0 ppid=1 pid=5931 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:48.655000 audit[5931]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5db07b50 a2=3 a3=0 items=0 ppid=1 pid=5931 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 12:03:48.655000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:48.722000 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 24 12:03:48.725665 kernel: audit: type=1327 audit(1769256228.655:954): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 12:03:48.737000 audit[5931]: USER_START pid=5931 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:48.742000 audit[5935]: CRED_ACQ pid=5935 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:48.766144 kernel: audit: type=1105 audit(1769256228.737:955): pid=5931 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:48.766240 kernel: audit: type=1103 audit(1769256228.742:956): pid=5935 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:48.890691 sshd[5935]: Connection closed by 10.0.0.1 port 51348 Jan 24 12:03:48.891281 kubelet[2876]: E0124 12:03:48.890344 2876 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-576w8" podUID="3477849f-ef62-42dc-be46-c8edc5b93ccb" Jan 24 12:03:48.892218 sshd-session[5931]: pam_unix(sshd:session): session closed for user core Jan 24 12:03:48.897000 audit[5931]: USER_END pid=5931 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:48.909208 systemd[1]: sshd@31-10.0.0.100:22-10.0.0.1:51348.service: Deactivated successfully. Jan 24 12:03:48.913945 systemd[1]: session-33.scope: Deactivated successfully. Jan 24 12:03:48.897000 audit[5931]: CRED_DISP pid=5931 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:48.919010 systemd-logind[1619]: Session 33 logged out. Waiting for processes to exit. 
Jan 24 12:03:48.921909 systemd-logind[1619]: Removed session 33. Jan 24 12:03:48.931254 kernel: audit: type=1106 audit(1769256228.897:957): pid=5931 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:48.931354 kernel: audit: type=1104 audit(1769256228.897:958): pid=5931 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 12:03:48.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.100:22-10.0.0.1:51348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 12:03:49.888289 kubelet[2876]: E0124 12:03:49.887876 2876 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"